perm filename REPORT[RDG,DBL]2 blob sn#540824 filedate 1980-10-15 generic text, type C, neo UTF8
CONTENTS

	∂Mary at Rand-Unix	Subject: EXPERT SYSTEMS AGENDA  21 Aug 1980 at 0939-PDT
	From: Mary at Rand-Unix	ESW Handouts  9 Sep 1980 at 1341-PDT
	List of files given out to ESKErs at conference
	∂3 Sep 1980 1448-PDT	<PLONDON at USC-ISIB>	Comments on draft reports
	∂24 Sep 1980 1054-EDT	WEISS at RUTGERS	revised spill
	∂Aiello at SUMEX-AIM	AGE experiment report and facts
	∂24-Sep-80 2248	<PLONDON at USC-ISIB>	Our facts, report to follow soon!
	∂24-Sep-80 2153	John.McDermott at CMU-1	report on experiment: ops5
	∂25-Sep-80 1246	RDG	Report of the ESW Oil Spill Effort
	∂25-Sep-80 1300	RDG	Representing various things in RLL
	∂25 September 1980 1628-EDT	John.McDermott	description of ops5
	∂9 Oct 1980 1558-PDT	Lee Erman <ERMAN at USC-ISIB>	Hearsay-III report.

∂Mary at Rand-Unix 	Subject:EXPERT SYSTEMS AGENDA  21 Aug 1980 at 0939-PDT
To: duda at Sri-Kl, reboh at Sri-Kl, lenat at Sumex-Aim
To: greiner at Sumex-Aim, nii at Sumex-Aim, aiello at Sumex-Aim
To: stan at Sri-Kl, Gorlin at Rand-Unix, erman at Usc-Isie
To: feigenbaum at Sumex-Aim, plondon at Usc-Isie, weiss at Rutgers
To: politakis at Rutgers, scott at Sumex-Aim, vanmelle at Sumex-Aim
cc: Rick at Rand-Unix, Don at Rand-Unix, Sarna at Rand-Unix
cc: Mary at Rand-Unix

Dear Members of Knowledge Engineering:

	Following is the latest agenda of the Expert Systems Workshop
this weekend.  Please note that your first meeting will be on Saturday
afternoon at 3:30.

					Mary Shannon
					Cathy Sarna


------------------------------------------------------------------------------
			  SCHEDULE OF MEETINGS

								 ROOMS
SATURDAY  August 23

 3:30-5:00     Briefing with chairmen, Knowledge Engineering
	       group (both team captains and wizards), and
	       introduction to the mystery expert.              Mahina

 Evening unscheduled

		  ---------------------------------
								ROOMS

SUNDAY   August 24

 9:00-10:00    KE group organizational meeting                  Mahina

10:30-12:30    KE group meeting:  Interview expert              Makai

12:30-1:30     Lunch  unscheduled

 1:30-5:30     KE group meeting: Interview expert               Makai

 7:00-10:00    Reception and buffet for all participants        Mauka/Mahina





		  ---------------------------------
								ROOMS

MONDAY   August 25

 8:30-8:45     Introductory remarks by chairman                 Mauka/Mahina

 8:45-12:00    Position papers by group leaders:
	       Description and summary of positions
	       of each group
	       (20-25 minutes each with 5-10 minute reactions)  Mauka/Mahina

12:00-1:30     Lunch  arranged                                  Makai

 1:30-5:30     KE group meeting:  Interview expert.             Mauka/Mahina

 1:30-2:00     Working group organizational meetings

				Definition                      ←←←←←←←←←←

				Knowledge Acquisition           ←←←←←←←←←←

				Architecture                    ←←←←←←←←←←

				Meta-Cognition                  ←←←←←←←←←←

				Performance                     ←←←←←←←←←←


 2:00-2:30     Break

 2:30-5:30     Working group meetings

				Definition                      ←←←←←←←←←←

				Knowledge Acquisition           ←←←←←←←←←←

				Architecture                    ←←←←←←←←←←

				Meta-Cognition                  ←←←←←←←←←←

				Performance                     ←←←←←←←←←←


 7:00-10:00     Banquet                                         Makai

 9:00-9:30     Presentation by Knowledge Engineering
		group leader                                    ←←←←←←←←←←

		←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
								ROOMS




TUESDAY  August 26

 8:30-12:30     Knowledge Engineering group meeting             Mauka/Mahina

		Working Group meetings for:

			Definition                              ←←←←←←←←←←

			Knowledge Acquisition                   ←←←←←←←←←←

			Architecture                            ←←←←←←←←←←

12:30-1:30     Lunch  unscheduled


 1:30-5:30     Knowledge Engineering group meeting              Mauka/Mahina

		Working groups meeting:

			Meta-Cognition                          ←←←←←←←←←←

			Performance                             ←←←←←←←←←←

 6:00-8:00     Dinner  arranged                                 Makai

 8:00-10:00     Knowledge Engineering meeting                   Mauka/Mahina

		Working group meetings or leader preparation
		of revised outline:

			Definition                              ←←←←←←←←←←

			Knowledge Acquisition                   ←←←←←←←←←←

			Architecture                            ←←←←←←←←←←

			Meta-Cognition                          ←←←←←←←←←←

			Performance                             ←←←←←←←←←←


------------------------------------------------------------------------------
								ROOMS

WEDNESDAY  August 27

 9:00-12:30     General Meeting
		  Group leaders each lead 1/2 hour
		  discussion describing their revised
		  outline of the paper.                         Mauka/Mahina

12:30-1:30     Lunch  arranged                                  Makai





 1:30-5:30     Knowledge Engineering meeting                    Mauka/Mahina

		Working group meetings

			Definition                              ←←←←←←←←←←

			Knowledge Acquisition                   ←←←←←←←←←←

			Architecture                            ←←←←←←←←←←

			Meta-Cognition                          ←←←←←←←←←←

			Performance                             ←←←←←←←←←←

 Dinner unscheduled

 Evening unscheduled

		  --------------------------------------
								ROOMS


THURSDAY  August 28


 9:00-12:00     Knowledge Engineering group
		  Programming results integrated into
		  outline                                       Mauka/Mahina

		Working group meetings
		  Prepare final outlines

				Definition                      ←←←←←←←←←←

				Knowledge Acquisition           ←←←←←←←←←←

				Architecture                    ←←←←←←←←←←

				Meta-Cognition                  ←←←←←←←←←←

				Performance                     ←←←←←←←←←←


12:00-1:00     Lunch unscheduled

 1:00-4:15     General Meeting
		  Group leaders each lead 30 minute discussion
		  describing their revised chapter outline.
		  Knowledge Engineering group leader describes
		  progress.                                     Mauka/Mahina

		OFFICIAL CONCLUSION OF WORKSHOP
		  Only group leaders, the Knowledge Engineering
		  group team captains, and chairmen remain.





 5:30-7:00     Dinner  arranged                                 Makai

 7:00-8:30     Group leaders and team captains meeting.
		  Discuss ways to integrate Knowledge
		  Engineering results with other chapters.      Mauka/Mahina

 8:30-11:00     Writing
		  Group leaders fill out their outlines.

		   -------------------------------------
								ROOMS

FRIDAY  August 29


 9:00-12:00     Wrap-up meeting
		  Group leaders and chairmen discuss chapter
		  formats and procedures for completing book.   Makai


-------

From: Mary at Rand-Unix		ESW Handouts   9 Sep 1980 at 1341-PDT
To: People at Rand-Unix
cc: Mary at Rand-Unix

Dear Participants:

	If you did not receive a copy of any of the following
	papers from the Expert Systems Workshop (handed out on
	Thursday), please indicate by number which you would
	like to have sent, along with your current address.

	If you need any further information, let me know.

				Mary at Rand-Unix


[Here was list of things]

                ---------------
-------

Mailed to Mary@RAND-UNIX  16:55 11-Sept
Mary,
	I am, apparently, not on your mailing list for the ESW messages.
(I found out about it from Doug Lenat.)  Could you please add me in?

Also, could you send me the following team reports:
 1. (RLL:EURISKO)
 2. (RLL)
 3. (EXPERT)
 4. (RLL:SPILL)
13. (AGE)
14. (ROSIE)
15. (EMYCIN)

Thanks,
	Russ Greiner
	Margaret Jacks Hall
	Stanford University
	Stanford, CA 94305

I can be reached as RDG@SAIL (preferred) or CSD.GREINER@SCORE.

	List of files given out to ESKErs at conference:

 1.  Summary of knowledge base EURISKO
     Doug Lenat

 2.  RLL: A Representation Language Language
     Doug Lenat and Russ Greiner

 3.  Review of the Spill Problem as Posed in EXPERT
     Sholom M. Weiss

 4.  Summary of Knowledge Base SPILL
     Doug Lenat

 5.  ROSIE Report
     Stan Rosenschein

 6.  SPILL-KNOWLEDGESOURCES in AGE
     Nelleke Aiello

 7.  KAS/PROSPECTOR
     R. O. Duda and Rene Reboh

 8.  Report on Experiment: OPS5
     C. Forgy and J. McDermott

 9.  EMYCIN
     Bill van Melle and Carli Scott

10.  Hearsay-III Report
     Lee Erman and Philip London

11.  Model Files for KAS/Prospector Oil/Chemical Spill System
     R. O. Duda and Rene Reboh

12.  LISP Report
     Stan Rosenschein

13.  AGE Team Progress Report
     Nelleke Aiello and Penny Nii

14.  ROSIE application (begins "Procedure Coordinate")
     Dan Gorlin and Stan Rosenschein

15.  Rule Index by Parameter
     C. Scott
∂3 Sep 1980 1448-PDT	<PLONDON at USC-ISIB>	 Comments on draft reports
To: KEGroupees: ;

                                                               3 September 1980

                        REMARKS ABOUT THE DRAFT REPORTS

                                  PHIL LONDON

                                      ISI

As you have all surely observed, not only did we consider different problems to
be solved, but we produced very different types of descriptions of those
solutions -- from no description at all, to code, to fairly extensive prose.

The best way to approach this effort of compiling our results is to start out
with prose descriptions generated in response to a uniform set of questions --
namely those Dave suggested.  They are:

   1. Problems and subproblems considered.

   2. Resulting design.

   3. Description of the development process.

   4. Strengths of your tool with respect to this problem.

   5. Weaknesses of your tool with respect to this problem.

   6. New perceptions.

Let me elaborate these more fully so that we can address the same issues.

1. Problems and subproblems considered

Most of us worked on two problems.  The first was the entire problem of crisis
management for chemical and oil spill emergencies at ORNL.  The second was the
more specific problem we called "source location".  Each of the teams addressed
different sets of issues, and produced different (partial) solutions to each of
these problems.

For the problems above, describe the scope of the actual problems that you
addressed.  What assumptions did you make (e.g., only one observer to allocate,
complete inventory data, no intermittent spills)?  What limitations were
imposed on you (by your tools, by limited time)?  How strongly was your choice
of problems to consider determined by your tool, by time constraints, other
factors?

2. Resulting design

Again, please describe your design for both the source location problem and for
the overall crisis management problem.  For the source location problem,
abstract your description to a level above that of code (your code, though
helpful, is not the most suitable means for communicating your design).  For
the crisis management problem, please describe how your design was to deal with
what you considered to be the difficult problems (e.g., interacting subgoals,
resource allocation (people, equipment), representation of the underlying
physical reality).

3. Design and development process

It would be interesting to hear about false starts, dead ends, seemingly
intractable problems that were tabled, etc.  An approximate timetable would be
helpful also.

4. Strengths

There are two classes of strengths (and weaknesses) that can be mentioned here.
These are (1) strengths of your tool in general (e.g., friendly user interface)
and (2) particular aspects of this problem that elicited strengths of your
system (e.g., ability to represent static relational information such as the
drainage system map).  Address these issues for both the source location
problem and the overall crisis management problem.

An interesting point is that some of the tools were evaluated by referring to
the notions of "deep" or "shallow" reasoning, or "deep" or "shallow" problems,
and concluding that a tool is appropriate for one or the other.  I'm not sure
that this is all that meaningful a characterization.  Can you comment on
this?

5. Weaknesses

   

6. New perceptions

Did you learn anything about your tool or knowledge engineering by working this
problem?



---------------

The agreed-upon date for the revised reports is Sept 19.

Comments are solicited!

   

   Phil
-------
                ---------------
-------

∂24 Sep 1980 1054-EDT From: WEISS at RUTGERS 	Subject: revised spill



                Review of the Spill Problem as Posed in EXPERT


                     Sholom M. Weiss and Peter Politakis




        Introduction
        ------------

	This report includes our responses to the six questions, a
transcript produced at the workshop of a sample consultation session, and
the EXPERT model for the spill problem.




        Problems and Subproblems Considered
        -----------------------------------

	Although we were asked to emphasize the source location subproblem,
this still left much room for varying interpretations of what constituted a
problem and a solution.  We concentrated on producing a prototype which gave
an overview of the essential elements of a consultant for the spill problem.
We, therefore, addressed the overall spill problem with special emphasis on
the source location subproblem.  We viewed the users of the consultant
system as being intelligent but not expert people who might need advice.  We
did not consider the users to be robots, whose every action needs to be
monitored.  In our view, many of the details of an overall solution could be
filled in later, but it would be worthwhile to cover all the major
operational goals:

1. Spill Discovery

2. Spill Characterization

   2.1 Emergency, Hazards

   2.2 Source, Location

   2.3 Material, Name

   2.4 Flow, Volume

3. Regulation Violation Analysis

4. Notification

5. Countermeasures on Spill Flow

   5.1 Containment Trapping

   5.2 Cleanup

   5.3 Mitigation


We were asked to implement a "map backtrack" procedure which involved
tracing through a map of the ORNL drain system.  We did not consider "map
backtracking" to be a significant problem; the implementation of this
subtask seemed somewhat unnecessary for an early prototype.  It appeared to
be more of a technician's role in performing a measurement that an expert
consultant might request.  However, we did implement the "backtrack"
procedure.




        Resulting Design
        ----------------

	The model was designed using the usual EXPERT representation and
methods.  The model consists of three major elements:

    (a)  hypotheses (the set of potential interpretations),

    (b)  findings (the set of possible questions),

    (c)  rules (a set of production rules relating hypotheses and findings).

The transcript contains a sample session and the spill model.  In reviewing
the material, we did not find much complex reasoning, and this is reflected
in the model which we produced.  Confidence measures, which are usually used
to reason in an uncertain environment, played a relatively unimportant role
in the model.  A very simple FORTRAN program was used to implement the map
trace.  The EXPERT code was not altered.  EXPERT provides for the
possibility of suspending execution and calling other programs.
Communication between programs is essentially transparent to the user.  We
considered the map program to be a relatively independent task that could be
represented as a procedure returning a number, i.e. the indicated source
drain code.
	The model was designed with the expectation that the user will
frequently ask for an interpretation in the middle of questioning and will
often revise responses to previous questions.  For example, the user may
report that DEM has not been notified, and the program's sequential
interpretation may be to notify DEM, which may result in a changed response
to the question about notification of various agencies.  This is illustrated
in the transcript.  In this respect, the design of the model was unusual.
In most models we have previously developed, the usual case is that
responses are changed because they have been erroneously reported.  It was
also unusual for us to design a model that required such dynamically varying
sets of sequential interpretations.




        Design and Development Process
        ------------------------------

	We were presented with a great deal of well-prepared material.  Given
the extremely short time frame, we did not consider interviewing the expert
essential.  The critical element in the development of the prototype model
was the model design, not the actual implementation.  The basic design was
arrived at by the first evening, and implementation and model refinement
continued for the next two days.  We began by reading the material provided
by the experts.  We were not concerned with producing a very detailed model.
Instead, we realized that it would be best to produce a running prototype,
which would leave room for further development.  This in fact is probably
what we would have done even if we had additional time.  A prototype such as
this can serve as a point of reference for the expert to view the general
approach to the problem.  It would be his role to comment on the direction
the model is taking.  By having something running, albeit incomplete, the
knowledge acquisition process is likely to proceed more rapidly than by
working out a model in great detail.  This in some sense is a form of
top-down design.  Sholom Weiss developed the overall EXPERT spill model,
while Peter Politakis implemented the map backtracking program.




        Strengths
        ---------

	A major design goal of EXPERT has been to make the system easy to
use and thereby allow for the rapid development of prototype models.  The
development of a prototype seems to be a major means of useful knowledge
acquisition.  For those problems which can be cast as classification
problems (i.e. composed of prespecified lists of conclusions and
observations), we feel that EXPERT can be a very convenient tool.  The
hazardous spill problem is another experience which supports this belief.




        Weaknesses
        ----------

	None noted in this problem.  We were pleased with the model
developed for the spill problem.




        New Perceptions
        ---------------

	By working on this problem, we have gained support for the feeling
that EXPERT can possibly be utilized in many problems for which it was not
originally intended.  While the system was originally designed to handle
classification problems, we begin to see the potential for modifications to
the system to handle problems which cannot be completely characterized as
classification problems.  The mechanism which allows the suspension of
EXPERT in mid-execution and the calling of another program which can supply
results to EXPERT is a step in this direction.  This mechanism worked well
in the spill problem.















[PHOTO:  Recording initiated  Tue 26-Aug-80 7:41PM]

LINK FROM WEISS, TTY 271

 TOPS-20 Command processor 3A(172415)
 End of COMAND.CMD.4
@expert

                       -- EXPERT Consultation System --



Enter File Name: spill

Type ? for a summary of valid responses to any question asked by the program.


Enter Name or ID Number: C. Johnson

Case Type: (1)Real (2)Hypothetical  *2

Enter Date of Visit: 8/26/80

Enter Initial Findings (Press RETURN to begin questioning):
*

  1. type of spill:
       1) source
       2) containment
       3) stream
     Choose one:
     *3

  2. Agencies notified:
       1) DEM
       2) ORO-DOE
     Checklist:
     *n

  3. initial spill location drain code m6:
     *dx




INTERPRETIVE ANALYSIS
---------------------
     Notify DEM of spill Discovery.
     Source and location must be determined to halt spill.
     The chemical has not yet been identified.
     Path flow and volume should be determined to evaluate success of cleanup
     and potential propagation pattern.
     "Containment is usually the first priority of the OSC." For oil
     floating on water, booms and absorbent material are indicated. Cleanup and
     mitigation depend on the specific causes of the spill.


  3. initial spill location drain code m6:
     *fix 2

FIX: Agencies notified:
       1) DEM
       2) ORO-DOE
     Checklist:
     *1

  3. initial spill location drain code m6:
     *5

  4. material type:
       1) oil - film or sheen
       2) chemical
     Choose one:
     *1

  5. initial known spill characteristics:
       1) source, location
       2) identity of spill material
       3) flow or volume
       4) emergency, hazards
     Checklist:
     *n

  6. spill flow:
       1) continuous
       2) intermittent/stopped
     Choose one:
     *1

  7. source drain code:
     *dx




INTERPRETIVE ANALYSIS
---------------------
     Notify ORO-DOE of possible violation.
     Backtracking is indicated to determine next drain basin to examine. Type
     RUN(map).
     Source and location must be determined to halt spill.
     The chemical has not yet been identified.
     Path flow and volume should be determined to evaluate success of cleanup
     and potential propagation pattern.
     "Containment is usually the first priority of the OSC." For oil
     floating on water, booms and absorbent material are indicated. Cleanup and
     mitigation depend on the specific causes of the spill.


  7. source drain code:
     *fix 2

FIX: Agencies notified:
       1) DEM
       2) ORO-DOE
     Checklist:
     *1,2

  7. source drain code:
     *run(map)

[Running Program.
         Is spill observed at DRAIN M6-  11 (Y/N): n

         Is spill observed at DRAIN M6-   6 (Y/N): n

         Is spill observed at DRAIN M6-   8 (Y/N): n

BACKTRACKING RESULTS
--------------------
Source spill is near DRAIN M6-   5
Potential buildings/grounds source is  3595...Done]


  8. material code:
     *sum


SUMMARY
-------

Name: C. Johnson          [HYP]
Case   1: Visit   1    Date: 8/26/80  

type of spill:
    stream

Agencies notified:
    DEM
    ORO-DOE

initial spill location drain code m6: 5

material type:
    oil - film or sheen

spill flow:
    continuous

source drain code: 5

source building/grounds code: 3595


  8. material code:
     *568

  9. spill volume:
     *50

 10. effluent discharge sample average mg/l:
     *7

 11. hazards:
       1) health
       2) fire
       3) reaction
       4) epa index
       5) oil pcb content
     Checklist:
     *n

              ...............................................


SUMMARY
-------

Name: C. Johnson          [HYP]
Case   1: Visit   1    Date: 8/26/80  

type of spill:
    stream

Agencies notified:
    DEM
    ORO-DOE

initial spill location drain code m6: 5

material type:
    oil - film or sheen

spill flow:
    continuous

source drain code: 5

source building/grounds code: 3595

material code: 568

spill volume: 50

effluent discharge sample average mg/l: 7




INTERPRETIVE ANALYSIS
---------------------
     source is known.
     substance is identified.
     spill volume is known.
     hazard analysis has been performed.
     "Containment is usually the first priority of the OSC." For oil
     floating on water, booms and absorbent material are indicated. Cleanup and
     mitigation depend on the specific causes of the spill.
     Currently in compliance with fresh water pollution act. (Spill may be
     contained.)

Command Mode: FIX x, WHY, DX, SUM, NEW, ASK, QUIT, etc, ? for HELP

:q

Would you like to SAVE this visit?  *n

[DONE]
@pop

[PHOTO:  Recording terminated  Tue 26-Aug-80 7:48PM]

/ oil and chemical spill model
**hypotheses
*taxonomy
dem	Notify DEM of spill Discovery.
viol	Notify ORO-DOE of possible violation.
Compl	Currently in compliance with fresh water pollution act.+
	(Spill may be contained.)
Ncomp	Currently not in compliance with fresh water pollution act.
Cntn	"Containment is usually the first priority of the OSC."+
	For oil floating on water, booms and absorbent material are+
	indicated. Cleanup and mitigation depend on the specific causes+
	of the spill.
dviol	Determine whether a noncompliance violation exists.
sloc	Source and location must be determined to halt spill.
Name	The chemical has not yet been identified.
Haz	Emergency hazards analysis should be performed. References are:+
	hazards expert, container label, OHMTADS, source supervisor,+
	source user.
Volfl	Path flow and volume should be determined to evaluate+
	success of cleanup and potential propagation pattern.
Back	Backtracking is indicated to determine next drain basin to+
	examine. Type RUN(map).
BuldB	Lookup buildings in indicated basin area.
Chanl	Chemical analysis is required. Backtracking is not feasible.+
	Indirect methods are necessary to locate spill source.
BuldC	Lookup buildings containing identified chemical.
Svolm	Estimate spill volume. rule out sources with inventory volume+
	less than estimated spill volume. Examine remaining+
	potential sources.

*causal or intermediate hypotheses
Srce	source is known.
Matrl	substance is identified.
volum	spill volume is known.
Hzrd	hazard analysis has been performed.

*print control/min=.8
dem,viol,dviol,back,chanl,buldc,buldb,svolm,sloc,+
name,haz,volfl,srce,matrl,volum,hzrd,cntn,compl,ncomp


**findings
*begin questionnaire*
*multiple choice
type of spill:
src	source
cntm	containment
strm	stream
*checklist
Agencies notified:
Dem	DEM
Oro	ORO-DOE

*numerical
Isloc	initial spill location drain code m6:

*multiple choice
material type:
oil	oil - film or sheen
chem	chemical
*checklist
initial known spill characteristics:
hsrlc	source, location
hmat	identity of spill material
hflvl	flow or volume
hhaz	emergency, hazards

*numerical
hsdcd	known source drain code:
*numerical
hsbcd	known source building/grounds code:

*numerical
hmcod	known material code:
*numerical
hspvl	known spill volume:
*numerical
hefds	effluent discharge sample average mg/l:
*checklist
known hazards:
hhlth	health
hfire	fire
hract	reaction
hepa	epa index
hopcb	oil pcb content

*multiple choice
spill flow:
cntin	continuous
stop	intermittent/stopped

*numerical
sdcod	source drain code:
*numerical
sbgcd	source building/grounds code:

*numerical
mcode	material code:
*numerical
spvol	spill volume:
*numerical
efdis	effluent discharge sample average mg/l:
*checklist
hazards:
helth	health
fire	fire
react	reaction
epa	epa index
olpcb	oil pcb content
*end questionnaire

**rules

/ simple true, false, unknown rules for question sequencing
*ff
f(src,t) -> f(hsrlc:hhaz,t)
f(hsrlc,f) -> f(hsdcd:hsbcd,u)
f(hsrlc,t) -> f(cntin:sbgcd,u)
f(hmat,f) -> f(hmcod,u)
f(hmat,t) -> f(mcode,u)
f(hflvl,f) -> f(hspvl:hefds,u)
f(hflvl,t) -> f(spvol:efdis,u)
f(hhaz,f) -> f(hhlth:hopcb,u)
f(hhaz,t) -> f(helth:olpcb,u)

/ rules relating findings to hypotheses.
/ the confidence measures have no significance in this model.
/ rules of the form [n:... indicate that n findings must be satisfied.
*fh
[1:f(hsbcd,1:*),f(sdcod,1:*)] -> h(srce,.9)
[1:f(hmcod,1:*),f(mcode,1:*)] -> h(matrl,.9)
[1:f(hspvl,1:*),f(spvol,1:*)] -> h(volum,.9)
[1:f(hhaz,t),f(helth,f),f(helth,t)] ->h(hzrd,.9)
[1:f(src,t),f(cntm,t),f(strm,t)] & f(dem,f) ->h(dem,.9)
[1:f(oil,t),f(chem,t)] &f(oro,f) -> h(viol,.9)

[1:f(hefds,0:10),f(efdis,0:10)] -> h(compl,.9)
[1:f(hefds,10.1:*),f(efdis,10.1:*)] -> h(ncomp,.9)
[1:f(src,t),f(cntm,t),f(strm,t)] ->h(cntn,.9)
f(hefds,u) & f(efdis,u) -> h(dviol,.9)

f(cntin,t) -> h(back,.9)
f(sdcod,1:*) -> h(back,-1)
f(stop,t) -> h(chanl,.9)
f(sdcod,1:*) -> h(chanl,-1)
f(sdcod,1:*) -> h(buldb,.9)
[1:f(hmcod,1:*),f(mcode,1:*)] -> h(buldb,-1)
f(stop,t) -> h(buldc,.9)
f(sdcod,1:*) -> h(buldc,-1)
f(stop,t) -> h(svolm,.9)
f(sdcod,1:*) ->h(svolm,-1)

/ hypotheses to hypotheses rules
/ spill characteristics yet to be determined
*hh

*if 
[1:f(src,t),f(cntm,t),f(strm,t)]
*then
h(srce,-1:0) -> h(sloc,.9)
h(matrl,-1:0) -> h(name,.9)
h(volum,-1:0) -> h(volfl,.9)
h(matrl,.1:1)&h(hzrd,-1:0) -> h(haz,.9)
*end

C************ Map Backtrack Program***************
C
C  Called by EXPERT while the consultation is suspended.  GETM reads the
C  initial spill location from the model; the program then walks upstream
C  through the drain map, asking whether the spill is observed at each
C  upstream drain, and PUTMs the indicated source drain code (SDCOD) and
C  building/grounds code (SBGCD) back to the model.
C  MH(i,1)  holds the number of drains feeding node i; MH(i,2...) their codes.
C  MHB(i,1) holds the number of nearby buildings; MHB(i,2...) their codes.
C
	SUBROUTINE USER
	dimension mh(46,5),mhb(46,3),ipath(25)
	call ifile(1,'map')
	read(1,2)mh
	read(1,2)mhb
2	format(1000i)
	call getm('isloc',cm)
	ICM=CM
	LAST=1
	IPATH(LAST)=ICM
C  Ask about each drain feeding the current node; follow the first "yes".
50	ISON=2
	NSON=MH(ICM,1)
	IF (NSON.EQ.0)GO TO 120
	NSE=ISON+NSON-1
	DO 100 I=ISON,NSE
	WRITE(5,500) MH(ICM,I)
	READ(5,501)ANS
	IF (ANS.EQ.'Y'.or.ans.eq.'y') GO TO 200
100	CONTINUE
C  No upstream drain shows the spill: report this node as the source and
C  list the potential source buildings around it.
120	IB=2
	nb=mhb(icm,1)
	write(5,504)
	write(5,505)
	WRITE(5,503)ICM
	CM=ICM
	CALL PUTM('SDCOD',CM)
	IF (NB.EQ.0)RETURN
	NBE=IB+NB-1
	CM=MHB(ICM,ib)
	CALL PUTM('SBGCD',CM)
	DO 150 I=IB,NBE
	write(5,502) mhb(icm,i)
150	continue
	RETURN
C  Spill observed upstream: step to that drain and continue the walk.
200	last=last+1
	IPATH(LAST)=MH(ICM,I)
	ICM=MH(ICM,I)
	GO TO 50
500	FORMat(10X,'Is spill observed at DRAIN M6-',i4,' (Y/N): ',$)
501	FORMAT(A1)
502	format(1x,'Potential buildings/grounds source is',i6)
503	format(1x,'Source spill is near DRAIN M6-',i4)
504	format(1x,'BACKTRACKING RESULTS')
505	format(1x,'--------------------')
	END
-------
                ---------------
-------

∂Aiello at SUMEX-AIM  - AGE experiment report and facts


 		          REPORT OF EXPERIMENT:  AGE


	                 Nelleke Aiello and Penny Nii

	                     Stanford University

	                       September 1980



Problem Formulation

	The main thrust of the AGE team's effort to solve the
spill problem has proceeded in two areas:

	1.  To collect, organize, and maintain information
	    in the form of a "situation board", and


	2.  To act as advisor to the on-scene coordinator,
	    to warn of possible hazards and advise on actions
	    to take.


	The parts of the problem we addressed include an overall
design for managing many interacting sources of knowledge; specific
strategies for dealing with emergency spill situations, determining
what to do first based on minimal, incomplete knowledge; and ways of
handling user interrupts and other "real world" constraints.
In addition, we decided to develop an "action KS" to handle
the subproblem of source location.
	Because of the limited time available for design and program
development, we ignored large parts of the spill problem, including
resource allocation, representation of static knowledge (e.g. the
drainage basin and inventories), natural language recognition or
interpretation of user responses, and many others.

Resulting Design
	We chose to implement our "solution" in the Blackboard
Model framework.  Within the Blackboard framework, AGE provides
several control options.  For this application, we selected a
standard EVENT-DRIVEN strategy, with a FIFO (first-in-first-out)
event selection method (a minimal sketch of such a control loop
appears below).  In AGE, procedural domain knowledge is represented
in sets of production rules called knowledge sources.  We designed an
overall configuration of knowledge sources, outlining their contents and
inter-relationships.  We began implementing the
knowledge sources from the top down, working toward the knowledge
sources dealing with the source location problem.  We also implemented
a rough draft of the hypothesis structure to be used as
the blackboard, from which the "situation board" is created
and updated.
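
[A minimal Lisp sketch of the FIFO event-driven control regime -- an
illustration, not AGE code; *KS-TABLE*, TRIGGERED-KNOWLEDGE-SOURCES, and the
event representation are hypothetical:]

    ;; Hypothetical sketch: events are processed in arrival order
    ;; (first-in-first-out); a triggered knowledge source may return new
    ;; events, which join the rear of the queue.
    (defvar *ks-table* (make-hash-table))     ; event type -> KS functions

    (defun triggered-knowledge-sources (event)
      (gethash (first event) *ks-table*))

    (defun run-event-driven (queue)
      (loop while queue
            do (let ((event (pop queue)))            ; first in, first out
                 (dolist (ks (triggered-knowledge-sources event))
                   (setq queue (append queue (funcall ks event)))))))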

	More specifically, we implemented an initial knowledge source which
asks a few basic questions of the observer.  This KS contains rules for
determining the emergency actions to be taken, warnings, containment
suggestions, and notification suggestions.  Next, a further assessment takes
place, determining missing information and setting up goals to find that
information, such as searching for the source of the spill.  The system then
goes off and attempts to satisfy those goals.  Whenever new, unexpected
information becomes available, we can return to do more assessment, possibly
reordering priorities.
	With respect to the source location problem, if backtracking is
possible, control goes to the Backtrack-Monitor KS.  This KS suggests the
next check points, understands responses from the observer about which
direction the spill is coming from, keeps track of the observer's location,
stops backtracking if the "trail" stops, and generates a list of possible
end points (storm drains and/or buildings).  This list can then be compared
with inventories or used to call building supervisors.

Retrospective

	The AGE team spent one evening trying to understand the overall
problem of inland water spill control and prevention.  The following day we
interviewed the experts, trying to get them to specify more exactly what
part or parts of the overall problem would be most aided by an "expert"
computer system.  We also asked a few questions to fill in gaps in our
understanding of the domain knowledge.  Our next step was to draw up an
imaginary protocol of the interaction between an "expert" system and an OSC
during a spill emergency.  We used this script to guide the early design of
the knowledge source configuration.  Then we showed both the script and the
KS configuration to one of the experts for his comments, trying to determine
if we were headed in the right direction.  Finally we spent a day coding,
implementing the following:

	1.  a hypothesis structure very similar to the parameters
	    described in the spill documentation,

	2.  eight knowledge sources, some bare skeletons, and
	    others in more detail,

	3.  the control information, using an EVENT-DRIVEN macro, and

	4.  some user functions.

The total amount of time spent designing and coding was approximately
12 hours.

	On reflection, we decided that a mixed event-driven and
expectation-driven (model-driven) control strategy would be better suited to
this application.  The problem requires the system to accept and immediately
act upon new data.  We think this can be accomplished (but have not
implemented this) by setting up models for expected data, with actions to be
taken when the expectation becomes true.  Unrequested user-provided data
and/or instructions could be handled by a general expectation which would
invoke a knowledge source to interpret the new data, like the WAITING KS
which we partially implemented.

Strengths

	AGE is a very useful tool in building expert systems, provided the
problem can be matched with one of the general control frameworks
implemented in AGE.  We currently have two such frameworks, the Blackboard
model and the Backchain Rule model.  If the domain fits one of these
frameworks then AGE can provide the control structure for the user's system.
As mentioned above, we feel that the overall spill management problem and
the sub-problem of source location can be handled very nicely by the
Blackboard framework, using a mixed event-driven and expectation-driven
control.
	We have implemented in AGE a number of interface features to help
the user specify his domain knowledge, such as a design subsystem,
acquisition functions, and an interface with the UNITS package for
representing static knowledge.  We also provide syntax checking and
debugging facilities.  Such aids are indispensable in building large and
complex systems like the spill manager.
	AGE also has the facility to add time markers to any value in the
hypothesis structure, and to save old values and times.  This is another
important feature for the spill manager system, particularly in report
generation and in complying with government regulations.

Weaknesses

	For problems requiring a control structure very different from the
frameworks provided in AGE, clearly AGE should not be used.
	Another weakness became apparent while working with the spill
problem: the need for actions on the right-hand sides of rules which do not
make changes to the blackboard.  The spill problem requires frequent PRINT
actions, to print warnings and suggestions.  This weakness of AGE has been
previously pointed out by several of our users, and we are changing AGE to
add this capability.


New Perceptions



	One thing that became very obvious during the course of this
experiment is the difference in emphasis by the various tools on
representation.  AGE clearly differentiates between control information,
procedural domain information, static domain information, and the evolving
solution.  Other teams seem to have done a lot of thinking about
representing assertional data in the same way as procedural knowledge and
even in the same way as control information.





←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←












                 FACTS FOR SYSTEM COMPARISON--AGE



                          Nelleke Aiello

                        Stanford University

                          September 1980

*****************************************************************************

M6-3 feeds into M6-2.

This is a piece of static information.  AGE allows the user to
represent static information about the domain in any suitable format.
In this case, one possibility is to use property lists to store
information about the drainage basin. 

M6-3 might have the following property list:

(NEXTDOWNSTREAM M6-2 NEXTUPSTREAM ( B3518 ) TYPE SINGLE DIFFICULTTOCHECK?
NIL DIFFICULTTOFIND? NIL)

******************************************************************************

All permanent storage tanks are diked.

This is another piece of static knowledge about the spill domain.  Since 
this fact is inherited by all instances of permanent storage tanks, UNITS
might be a good representation to use.  (AGE provides an interface with
UNITS.)

PERMANENT-STORAGE-TANK
	TYPE--(role S datatype ATOM value PERMANENT)
	CONTAINMENT--(role  S   datatype ATOM value DIKED)
	MEMBERS--(STORAGE-TANK10 ...)
	LOCATION--(role U datatype PAIR)

*****************************************************************************

There's a tank outside building 3035 with a dike around it.

STORAGE-TANK10
	MEMB/OF--PERMANENT-STORAGE-TANK
	LOCATION--(OUTSIDE B3035)
	TYPE--PERMANENT
  	CONTAINMENT--DIKED

******************************************************************************

Oil spilling into water causes a sheen.

This is represented as a rule in AGE.  This information is useful in
determining that a spilled material is oil and would probably be
ascertained during the initial questioning of the observer.

if	($VALUE 'DISCOVERER DESCRIPTION LATEST) = 'SHEEN

then	(PROPOSE ch.type MODIFY
		 hypo-element 'DISCOVERER
		 attr-value (INITIAL-ID 'OIL)  
		 link-node NIL
		 support DISCOVERY
		 ev.type QA3       
		 comment If a sheen is observed, then oil may have been spilled.)


*****************************************************************************

The types of countermeasures taken are a boom at Wier1 and a skimmer at
Wier2.

This information is part of the evolving solution of the spill problem.
In AGE this kind of changing information is stored on the "blackboard".
The blackboard, or hypothesis structure, consists of levels of structure
defined by the user during the design phase.   While the system
is running an actual spill dialogue, new nodes are created at each
level as needed.

levelX--COUNTERMEASURE
		NUM-2
		LLIST--(COUNTERMEASURE COUNTERMEASURE1 COUNTERMEASURE2) 
		LOCATION--
		TYPE--
		METHOD--
		
COUNTERMEASURE1			    COUNTERMEASURE2
	LOCATION--WIER1			    LOCATION--WIER2
	TYPE--CONTAINMENT		    TYPE--CONTAINMENT
	METHOD--BOOM   			    METHOD--SKIMMER


******************************************************************************

Water flows through a pipe at .5 ft / sec.

Use property lists or UNITS.

WATER-FLOW-PIPE			    WATER-FLOW-CREEK
	LOCATION--DRAINAGE-BASIN	     .
	RATE-- .5			     .
	UNIT-- FEET/SECOND   		     .


*****************************************************************************

If the chemical is HF, then tell the observer not to breathe it.

This rule would be invoked during the initial dialogue with the observer.

if	($VALUE 'MATERIAL-ID CHEMICAL LATEST) = 'HF
	OR
	($VALUE 'DISCOVERER INITIAL-ID LATEST) = 'HF

then	(PRINT  pr.type PRINTOUT
      		output "*** DO NOT BREATHE--DANGEROUS FUMES. ***"
		file  TTY:)

*****************************************************************************

When attempting to find the source of a spill in the creek, look for the
chemical in the manhole nearest the creek.

The following more general rule would handle this situation.

if  	OBSLOC = OBSVIS
	DONTKNOWSOURCE
	NEXTLOCS ← (NEXTCHECKPTS OBSVIS)

then	(PRINT pr.type PRINTOUT
	       output "Check the following location(s) " NEXTLOCS
	       file TTY:)

	(PROPOSE ch.type MODIFY
		 hypo-element OBSERVATION-LOC
		 attr-value (SUGGESTED-LOCS NEXTLOCS)
		 link-node NIL
		 support BACKTRACK-MONITOR
		 ev.type WAIT
		 comment If the conditions for backtracking exist, i.e.
			a visible or measurable trail, then backtrack
			to the next upstream check pts.)

This rule would also stop backtracking if the chemical was no longer
"visible".

******************************************************************************

If a flammable liquid is spilled, call the fire dept.

if ($VALUE 'DISCOVERER INITIAL-ID LATEST) = 'GASOLINE
    OR
   ($VALUE 'MATERIAL-HAZARD FIRE) =  T

then  (PRINT pr.type PRINTOUT
	     output "***CALL THE FIRE DEPARTMENT***"
	     file TTY:)

*****************************************************************************

If two people's descriptions of a spill are inconsistent, prefer the second
description.

In AGE both descriptions would be saved on the blackboard, and any rule
could reference the second description by specifying LATEST to the
access function.

($VALUE 'DISCOVERER DESCRIPTION LATEST)

*****************************************************************************

Time  

In AGE a time marker can be added to each item on the blackboard,
indicating the exact time that piece of information was written.  This
information is indispensable when generating spill reports.

FRESH-WATER-VIOL
     AVG-EFFLUENT-DISCHARGE  (X . 12:01:30-09:17:80) (Y . 09:30:00-09:17:80)

*****************************************************************************
    




-------
∂24-Sep-80  2248	<PLONDON at USC-ISIB>: Our facts, report to follow soon!

                  HEARSAY-III REPRESENTATIONS OF SPILLS FACTS


                           Lee Erman and Phil London
                      USC/Information Sciences Institute
                              September 23, 1980


1:  M6-3 feeds into M6-2.

The drainage system is represented on Hearsay-III's blackboard data structure
as a network of units (objects) of types ManHole, OutFall, Sump, Inlet, and
Juncture.  Each of these types is a subtype of the type DrainageNode.  Each
DrainageNode, except for OutFalls, has a unique "downhill" DrainageNode; this
is represented with the Hearsay role "Pipe".  Thus:
         (ROLE-OF Pipe M6-3 M6-2).

2:  All permanent storage tanks are diked.

This might be (1) encoded directly within one or more knowledge sources (KSs)
or (2) encoded explicitly in the AP3 database.  If encoded in AP3, there are
two sub-options:  (a) as an inference rule and (b) in the type structure.
   (1)  One option is to encode such a fact directly within one or more KSs.
   Hearsay takes no position on the style or structure of intra-KS encodings;
   the KS designer is free to use the AP3 database, but may also use any other
   Lisp structures, both data and procedural.

   (2a)  One relevant type structure for representing the fact is to have types
   DikedObject, StorageTank, and PermanentStorageTank and define them such that
   PermanentStorageTank is a subtype of both StorageTank and DikedObject.
   Then, each PermanentStorageTank is "automatically" an instance of
   DikedObject.

   (2b)  If we have StorageTank objects and unary relations on them (and on
   other objects, perhaps) of PERMANENT and DIKED, the inference rule would be:
            INFERENCE:  (PERMANENT StorageTank) -> (DIKED StorageTank)
   Since AP3 always expands inference rules in the forward direction, the
   effect is that any assertion into the database of the antecedent will also
   result in the assertion of the corresponding consequent.

The database provides a globally accessible representation of Fact 2; encoding
within a KS is private (unless two or more KSs cooperate in an ad hoc manner).
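
[A toy Lisp sketch of the forward expansion described above -- not AP3 code;
ASSERT-FACT and *DB* are hypothetical names:]

    ;; Hypothetical sketch: asserting the antecedent (PERMANENT x)
    ;; immediately also asserts the consequent (DIKED x).
    (defvar *db* nil)

    (defun assert-fact (fact)
      (push fact *db*)
      (when (eq (first fact) 'permanent)
        (push (list 'diked (second fact)) *db*))
      *db*)

    ;; (assert-fact '(permanent storage-tank-7)) leaves both
    ;; (PERMANENT STORAGE-TANK-7) and (DIKED STORAGE-TANK-7) in *DB*.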

3:  Oil spilling into water causes a sheen.

We are not sure of the meaning of "spilling", so we rewrite the fact as:
3':  The presence of oil in water in a vessel causes a sheen.
   (a)  As an inference rule, this could be written as:
              INFERENCE:   (and (CONTAINS Vessel 'WATER)
                                  (CONTAINS Vessel 'OIL))
                        -> (SHEEN-ON-CONTENTS Vessel)

   (b)  It could also be encoded procedurally within a KS.

4:  The types of countermeasures taken are a boom at Wier1 and a skimmer at
Wier2.

Again, we are confused by the wording, and so rewrite the fact as:
4':  These countermeasures have been taken: a boom at Wier1 and a skimmer at
Wier2.

These would be represented with units on the blackboard.  The structure for the
first countermeasure would be:

                 ---------------------------
                 | unit:countermeasure-103 |
                 ---------------------------
                      /     |         \
                     /      |          \
       who-decided  /  ...  |time-      \  measure
                   /        |taken       \
                  /         |             \
                 /          |              \
             --------   ---------         ------------------
             |      |   |       |         |  unit:boom-61  |
             --------   ---------         ------------------
                                                  |
                                                  |loc
                                                  |
                                           --------------
                                           | unit:WEIR1 |
                                           --------------


In this diagram, boxes represent units (objects on the Hearsay-III
blackboard), and labelled arcs are Hearsay-III roles.

5:  Oil sometimes spills out of broken machinery.

This fact would be represented by two KSs:
  KS1:  This triggers on the assertion on the blackboard of an instance of
  broken machinery at some location.  Its action is to increase the predicted
  likelihood of that location being a source of oil pollution.

  KS2:  This triggers when the location of an oil pollution source has been
  narrowed to a relatively small area but the source itself has not been found.
  Its action is to request information about possible broken machinery in the
  area.  The trigger pattern can be implemented using the
  Likelihood-Of-Pollutant-Source attribute on each DrainageNode; this attribute
  summarizes what various KSs have indicated about the probability of the
  pollutant source being upstream of the node.  The measure is modified by such
  things as observations and inventory reports.  The trigger pattern of KS2
  would be matched by a DrainageNode at which oil has been observed and which
  has no upstream Nodes with significant Likelihood-Of-Pollutant-Source.  The
  action of the KS might be to make the direct external request for information
  about broken machinery.  Or it might instead just post such a goal on the
  blackboard, on the assumption that there are other KSs that can react to the
  request appropriately (e.g., one might ask for human observations, another KS
  might request a report from building supervisors, while yet another might
  attempt to access some remote, expensive database).  In any case, if a report
  of broken machinery does eventually arrive, presumably in reaction to one of
  these actions, KS1 will trigger on it and modify the likelihoods
  appropriately.
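
For concreteness, KS2's trigger might be sketched in the same informal
notation used for Fact 3 above.  (This is only a sketch:  except for
Likelihood-Of-Pollutant-Source and DrainageNode, which appear in the text,
the predicate and action names are our own inventions, not actual
Hearsay-III vocabulary.)

              TRIGGER:   (and (OIL-OBSERVED-AT Node)
                              (not (exists UpNode
                                     (and (UPSTREAM-OF UpNode Node)
                                          (SIGNIFICANT
                                            (Likelihood-Of-Pollutant-Source
                                             UpNode))))))
                      -> (POST-GOAL (FIND-BROKEN-MACHINERY-NEAR Node))
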
6:  Many inventory lists are incomplete.

This "fact" would be reflected in the policy of not making any absolute
decisions based on the inventory lists.  In particular, we do not rule out a
site as a source merely because the substance is absent from the inventory
listing for that site; rather, the value of the site's
Likelihood-Of-Pollutant-Source attribute is reduced.

7:  Water flows through a pipe at .5 ft/sec.

This would most likely be procedurally encoded within relevant KSs that perform
flow calculations.
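
(For example, a KS estimating when contaminant observed at one manhole
should reach the next manhole 600 feet downstream -- the distance is purely
illustrative -- would compute 600 ft / 0.5 ft/sec = 1200 sec, i.e., 20
minutes.)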

8:  If the chemical is HF, then tell the observer not to breathe it.

This function would be handled by a combination of two KSs that together handle
much more general cases:
   KS1:  This triggers on the pollutant being identified.  Its action is to
   look it up in OHMTADS and put on the blackboard its important
   characteristics.

   KS2:  This triggers on the discovery of a caustic and volatile pollutant.
   Its action is to put out an appropriate warning.
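
A sketch of KS2 in the same informal notation (the predicate and action
names here are again our own, purely illustrative):

              TRIGGER:   (and (IDENTIFIED Pollutant)
                              (CAUSTIC Pollutant)
                              (VOLATILE Pollutant))
                      -> (WARN-OBSERVERS Pollutant)
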
9:  When attempting to find the source of a spill in the creek, look for the
chemical [first] in the manhole nearest the creek.

We believe this to be a poor rule:  We have an evaluation function that is used
to select the next observation to be made; it is sensitive to the expected
information gain and the cost of the observation.  This policy is implemented
by one KS which encodes the evaluation function in its action and triggers on
the change of any relevant input, and a second KS which selects the next
observation to be made when an assignable observer is available.
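
(A minimal form of such an evaluation function, our own simplification of
what the text describes:  value(obs) = expected-information-gain(obs) /
cost(obs); the second KS then assigns the available observer to the
observation with the highest value.)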

10:  For oil spills, if there are limited human resources, do the containment
before locating the source.

This heuristic would be encoded within the action of a scheduling KS that
arbitrates between KSs that do containment and KSs that do search.

11:  If the spill is gushing, locate the source before trying to contain it.

This would either be combined within the same KS described in Fact 10, or it
could be in a separate scheduling KS, with each of these competing heuristics
modifying the priorities of the search and containment activities.

12:  If a flammable liquid is spilled, call the fire department.

This would be a KS similar to KS2 of Fact 8.
-------

∂24-Sep-80  2153	John.McDermott at CMU-1  report on experiment: ops5
To:  ESKErs: 


                          REPORT ON EXPERIMENT:  OPS5

                                (DRAFT VERSION)

                           C. Forgy and J. McDermott

                               20 September 1980

1. Parts of the problem attempted
  We  decided  to  focus  on  two  aspects of the spill problem:  the suggested
subproblem of locating the source of the contaminant and, the problem that most
interested us, the overall organization of the system.  We were  interested  in
the  overall organization because it presented control issues that we had never
faced before -- co-ordinating a number of asynchronous subtasks (carried out in
this case by human agents) where the co-ordinator has only limited control over when the
information it needs will become available.  In performing this task, a program
would  have  to  issue  commands  to  the  agents, wait until one of the agents
responded, process any information  that  agent  had,  and  perhaps  issue  new
commands  to one or more of the agents.  The program would have no control over
the order in which the agents reported back;  it  would  have  to  be  able  to
process any information about any aspect of the problem at any time.

  To  work  on  this  multi-subtask  problem,  we  of course had to provide the
program with at least rudimentary abilities in several areas.  In most of these
areas the program has only three or four productions; thus the  knowledge  that
the program has is just enough to enable it to do something not too implausible
in each of these areas.  The subtask that it handles most adequately is
locating the source of the spill.  It can direct an agent up the creek and then
from  manhole  to  manhole  in  the  drain  system  looking  for  the   highest
contaminated  point  in  the  system.    If  that  point is immediately below a
building, the agent is told to look in the building  for  a  leaking  container
(the name or class of the material in the container and the minimum size of the
container  are indicated if known -- i.e., if someone has told the system).  If
the point is not immediately below a building, the agent is told to search  the
area around the manhole.

  Although  the  system  that  we  actually  implemented  has extremely limited
capabilities at the moment, our design of the system took into account the fact
that much of the data that the system has to work with is unreliable -- e.g.,
descriptions of the same spill may differ across observers; information about
the number, kinds, and fullness of containers in the buildings may be
incorrect; information about potential spill sites is almost surely incomplete;
and the identity of the substance that spilled is ordinarily not known with
certainty.  Our design took this unreliability into account both at the level
of  individual  assertions  and  at  the  level of the control structure of the
program.  We also took into account the fact that at different times and  under
different  circumstances,  a  varying  number of agents would be available.  To
simplify our design task, we assumed that at any given time, the  system  would
be  dealing with a single spill.  Also, we did not attempt to address the issue
of how to provide the system with a useful sense of time; it has no  notion  of
how  long  particular  instructions that it might give should take to complete,
nor does it have a sense of when  to  initiate  new  activities  based  on  the
passing  of  too  much time.  Though it is extremely important in the case of a
system like SPILL to make the system easy to interact with,  our  focus  during
the  few  days  we  worked on the program was on issues of system organization,
rather than on user interface issues.

  In the simple, sample interaction shown in Appendix 2, SPILL  interviews  the
user,  is told that three agents are available to work on the task, assigns one
of them to find the head of the spill, another to determine  which  outflow  is
the  source  of  the  contaminant,  and  the  third  to collect a sample of the
contaminant and have it analyzed.  When the second agent reports  that  outfall
WOC-6  is  the  source  of  the contaminant, SPILL tells him to check M6-1 (the
first manhole above the  outfall).    When  the  agent  reports  that  M6-1  is
contaminated, SPILL directs him to check M6-2.  When the agent reports that M6-
2  is contaminated, SPILL tells him to check M6-4.  When the agent reports that
M6-4 is not contaminated, SPILL tells him to check M6-3 (the  only  manhole  on
the  other  branch  leading  to  M6-2).    When  the agent reports that M6-3 is
contaminated, SPILL concludes that the source of the contaminant is  likely  to
be building 3518.

2. The organization of the program
  SPILL,  like most production systems, makes extensive use of "goals".  A goal
is an element in working memory that designates a task to be  performed.    The
collection  of  productions  that  are  sensitive  to the goal for a given task
compose what is called a "method" for that task.  Thus the productions in SPILL
that perform the storm drain backtracking (the productions which are  sensitive
to  the  goal  symbol "TRACE-BACK") compose one method and the productions that
request an  analysis  of  the  spilled  material  (the  productions  which  are
sensitive to the goal symbol "ANALYZE-MATERIAL") compose another.

  Methods and goals are not features of the OPS5 language; they are simply
organizing principles that we as programmers find  convenient  to  use.    When
methods  are  used,  productions can perform tasks directly, or they can create
goals which ask other productions to perform the tasks.   Creating  a  goal  is
analogous  to  calling  a subroutine or evoking a knowledge source.  Thus goals
and methods are what allow us to control the granularity at which  we  work  at
any  given  time;  when  we  want  to  work  with  high  level issues, we write
productions that create and manipulate high-level subgoals.
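
(FIND-SOURCE-1 in Appendix 1 is an example of such a production:  it marks
the active FIND-SOURCE goal PENDING and creates an active TRACE-BACK
subgoal beneath it.)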

  SPILL has 62 rules distributed among 12 methods.  All interaction with  SPILL
is  handled by the INTERRUPT method; this method activates either the INTERVIEW
method (if the user wants to enter information about the spill) or the INTERACT
method (if an agent is responding  to  a  request  for  information  previously
generated by SPILL).  Once SPILL has been given the initial description of
a spill, the INTERVIEW method generates the CO-ORDINATE goal.  The  CO-ORDINATE
method  contains all of the knowledge that SPILL has about when to initiate (or
retry) the various subtasks that comprise the spill containment task.   It  can
generate  four  different goals: ALLOCATE, INTERVIEW, CHARACTERIZE-CONTAINMENT,
and CHARACTERIZE-SOURCE.  The ALLOCATE method asks the user for  the  names  of
(additional)  agents  that  can be assigned to the tasks; it notes that each of
these agents is unassigned.  The  INTERVIEW  method  generates  a  request  for
information from an agent assigned to verify a spill report.  The CHARACTERIZE-
CONTAINMENT  method  tells an agent to find the head of the spill; depending on
whether or not the spill can be completely contained,  the  agent  is  told  to
contain  it  or to warn downstream communities.  The CHARACTERIZE-SOURCE method
can generate two goals:   FIND-SOURCE  and  ANALYZE-MATERIAL.    All  that  the
ANALYZE-MATERIAL method does is ask an agent to analyze a sample of the spilled
material.

  A  significant part of the limited knowledge that SPILL has of the spill task
is found in its FIND-SOURCE  method  and  in  FIND-SOURCE's  submethods:  FIND-
OUTFALL, TRACE-BACK, and MATCH-MATERIAL.  If the spill is detected in White Oak
Creek,  the  agent  assigned to finding the source (at present SPILL can handle
only one agent on this task) is sent to the point where the spill was  detected
and told to walk up White Oak Creek and examine the outfalls until he discovers
which  one is releasing the contaminant (FIND-OUTFALL).  The program attends to
other things while the agent is performing the search.  When the agent  locates
the  outfall  he interrupts the program and tells it which one is contaminated.
The program dumps into its working memory a topological map of the  appropriate
drainage  basin  so  that it can direct the agent more closely from that point.
It sends him to the manhole closest to the outfall and asks him  to  check  for
contamination  there.   Again the program attends to other things while waiting
for the agent.  The agent checks the manhole and reports to  the  program,  and
the  program  sends him to another manhole.  Since the drainage basin is in the
form of a tree, the program has to decide in what order to search the manholes.
SPILL  contains  five  productions  for this task which implement the following
strategy:

   - If the manhole is contaminated, send the agent to the next manhole up
     the tree.  If there are multiple  manholes  above  the  current  one,
     choose one based on a predetermined order.

   - If the manhole is not contaminated and the manhole immediately before
     the current one had more than one above it, choose one of the others
     to examine.

   - If the manhole is not contaminated and the manhole immediately before
     the current one had no other predecessors (or  in  the  more  general
     case,  no  other  predecessors that have not been examined) then stop
     the backtracking.

The same five productions are used for all drainage basins; they work from  the
topological  maps  of the basins (TRACE-BACK).  Once the back edge of the spill
has been found, SPILL uses whatever information it has  been  given  about  the
spilled  material (name, type, or volume detected) to narrow down the area that
needs to be searched for a damaged container (MATCH-MATERIAL).
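
The first of these productions might look roughly like the following.  (A
sketch only:  the GOAL and SUGGESTION elements follow the rules in Appendix
1, but the MANHOLE and MAP-LINK element classes and their attributes are
our guesses at SPILL's actual working-memory format.)

(P  TRACE-BACK-UP
  (GOAL  ↑STATUS ACTIVE  ↑NAME TRACE-BACK  ↑ID <ID>)
  (MANHOLE  ↑NAME <M>  ↑CONTAMINATED YES)
  (MAP-LINK  ↑BELOW <M>  ↑ABOVE <M2>  ↑RANK 1)
  -->
  (MAKE  SUGGESTION  ↑STATUS PENDING  ↑NUMBER (GINT)  ↑ID <ID>
         ↑ACTION CHECK-MANHOLE  ↑OBJECT <M2>))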

  During the initial part of the task only one goal is active at a time.   When
a  production  asserts  a  subgoal  (indicates  that it is time to work on some
subtask) the goal that was being worked on is marked as de-activated.   When  a
subtask  finishes,  its  goal is de-activated and the goal above it in the goal
tree is re-activated.  Since the program can process  information  much  faster
than the agents can collect it, it is frequently the case that subtasks have to
be  suspended  until the information they need can be collected.  When an agent
is assigned to collect some information, if the subtask cannot proceed  without
the  requested  information,  the next higher goal is re-activated.  The higher
goal can attempt other subtasks or pass control up to  the  next  higher  goal.
When an agent has collected a piece of information requested by SPILL, SPILL is
interrupted  and  the information typed in.  The arrival of the new information
causes the subtask that requested it to be re-activated.  If  this  causes  two
goals  to be active simultaneously (which is unlikely given the relative speeds
of SPILL and the agents) the method that  is  processing  the  new  information
dominates.    The  other  method  will  not  be able to proceed until the newly
activated method and the methods for the goals above it in the goal  tree  have
finished  processing.  Typically each interaction with the agent (accepting his
report, processing it, and generating the next order) involves the firing of  6
or 7 productions and requires about 200 milliseconds of CPU time on a KL-10.

  Though  SPILL has only a limited amount of knowledge about the spill task, it
was designed in such a way that adding domain-specific knowledge
would be straightforward.  Some of the more obvious extensions that could be
incorporated into the existing framework are the following:  (1) In ASSIGN,
rules could be added that could recognize the need for particular materials for
particular tasks, and rules could be added that would be sensitive to the
relative locations of agents, materials, and work to be done.  (2) In
CHARACTERIZE-CONTAINMENT, rules could be added that would recognize situations
in which intermediate containment is called for and suggest suitable measures.
(3) In CHARACTERIZE-SOURCE, rules could be added that would examine a data base
containing descriptions of the contents of the various buildings in the
possible spill area to narrow down the area that needs to be searched.
(4) Finally, the CO-ORDINATE method could be extended to take advantage of its
"central location" as an information clearing house:  it could have rules that
would enable it to determine when to reactivate methods to process new
information.

3. The methodology used
  The methodology that we used to develop SPILL is similar to  the  methodology
that  we  have  used in developing other expert systems; the main difference is
that due to the severe time constraints imposed by the Workshop structure, work
that we would ordinarily have spent days or weeks on had to be compressed  into
a  few  hours.    In general our approach is to spend several weeks acquainting
ourselves with the domain; during this time, as the structure becomes apparent,
we design a skeleton system -- i.e., a system that lacks most of the knowledge
necessary  for  expert performance in the domain, but one that approaches tasks
in the domain in a plausible fashion.  Given this initial,  uninformed  system,
we then pose tasks for it, determine by interacting with experts what knowledge
the  system  needs  to make its performance acceptable, and then add rules that
contain this  knowledge.    This  refinement  process  takes  weeks  or  months
depending on the complexity of the domain.

  The fact that SPILL had to be developed during a three day period required us
to  deviate  somewhat  from our ordinary approach to developing expert systems.
It is possible that the artificial time constraints resulted in our approaching
the task in a very different way than we would have  had  there  been  no  time
constraints.    We  doubt  that  this is the case, however; we suspect that the
artificiality of the time constraints was offset by the fact that much  of  the
work  that  we  would  ordinarily  do  in  developing an expert system was to a
considerable extent done for us by our experts.  The description  of  the  task
domain  that was given to us by Carrol Johnson and Sara Jordan provided a quite
complete picture of the structure of the domain.  Thus, all that was  necessary
in  order  to  design  an  initial version of SPILL was for us to decide how to
represent this task structure with rules.  Moreover, once  an  initial  set  of
skeleton rules was written, it was quite easy to spot inadequacies in SPILL's
performance since the description  of  the  task  contained  a  great  deal  of
information  about  how  the  task  should  be performed.  Saturday evening and
Sunday morning were devoted to  understanding  the  problem.    Design  of  the
program itself did not begin until Sunday afternoon.  At that time we discussed
the  organization  of the system and its control structure for about two hours.
We spent another hour designing the data structures.  We  were  then  ready  to
begin  the  actual  coding.  Only a few rules were written Sunday.  Most of our
rule-writing took place on Monday afternoon and on Tuesday.    During  this  12
hour  period  McDermott  wrote  about  40 rules; Forgy wrote about 20 rules and
debugged a subset of about 36 rules that perform the source  location  subtask.
The  full  set  of  rules has not yet been debugged.  Only a few of the 24 man-
hours that we spent in writing 62 rules and debugging  a  set  of  36  involved
adding  knowledge  to  make  the system's performance more expert.  Most of the
time was spent in modifying and refining the system design.

4. The strengths of OPS
  Perhaps the single most important characteristic  of  OPS5  is  its  powerful
pattern  matching ability.  It incorporates a sub-language for writing patterns
which allows one to write quite complex  patterns,  and  it  has  an  efficient
pattern  matcher,  so  the  complex  patterns can be used in fact as well as in
principle.  One obvious benefit of having such abilities is that  the  user  is
able  (and  even encouraged) to place a major part of the burden of solving his
problem on the pattern matcher.  The R1 system is a good example.  Even  though
R1  solves  a  difficult problem, it solves it with almost no search.  With the
complex patterns its rules incorporate, it is  able  simply  to  recognize  the
correct steps to take at all but one stage in the process.

  Powerful  pattern  matching  abilities are especially important in a general-
purpose production system language like OPS5 because pattern  matching  is  the
basis  for  building  control  constructs  and  data  representations.    If  a
production system is to incorporate  some  control  construct,  the  individual
rules in the system must be able to recognize their parts in the construct.  If
a  production system is to be able to process some kind of new data object, the
rules must be able to recognize the object and its component parts.

  The fact that OPS5 is a general-purpose language makes it easy to tailor  the
design  of each expert system to fit the characteristics of its domain.  With a
general  purpose  tool,  different   kinds   of   data   can   have   different
representations,  different  sections  of  a  system can have different control
structures, and if it proves desirable, different sections of a system can even
have different organizing principles.   Moreover,  OPS5  does  not  hinder  the
refinement  process.    Surely  among  the  things a human learns on his way to
becoming an expert are new ways to organize his methods, new  ways  to  perform
old  tasks,  and  new  ways  to  represent information.  Since an expert system
typically acquires the bulk of its knowledge after the  first  version  of  the
system  is  implemented,  it should have the ability to do the same things.  If
problem solving strategies or system organizations are built  into  the  tools,
when  the  need  for  a  new strategy or organization arises, one will have the
unpleasant choice of ignoring the demands of the problem or finding  a  way  to
circumvent the features of the tools.

5. The weaknesses of OPS
  The  deficiency  of  OPS5  that  we  feel  most acutely is the lack of a good
programming environment.   While  OPS5  does  have  some  debugging  aids,  the
programming environment it provides is far inferior to that of, say, LISP.  The
normal mode of  debugging  a  small  system  like  the  one  written  for  this
experiment is to run it until a bug is identified, then to run a text editor to
fix the bug, and then finally to recompile and start the run again.  We plan to
do  a  considerable  amount  of work on OPS over the next two years in order to
remedy  this  defect.    Ideally,  we  would  like  to  make  this  programming
environment  such  that  an  unsophisticated  user, communicating with OPS in a
subset of natural language, could enter rules, test his  system's  performance,
and  then  add or modify rules as required.  But since OPS is a general-purpose
language, it puts a very heavy design load on the user.  It is not clear to  us
at the moment how to preserve the general-purpose character of the language
while at the same time making it a suitable tool for an unsophisticated user.

6. New perceptions
  At CMU we have been using production systems  like  OPS5  for  six  to  eight
years.    We  have  been using OPS5 for almost a year.  It did not surprise us,
therefore, that we came up with no new perceptions about OPS5 in the three days
of the experiment.

Appendix 1: Two of SPILL's rules
(P  FIND-SOURCE-1
  (GOAL  ↑STATUS ACTIVE  ↑NAME FIND-SOURCE  ↑ID <ID>)
  (SOURCE  ↑LOCATION NIL)
  (FLOW  ↑KNOWN-BACK-REGION STORM-DRAIN)
  - (GOAL  ↑NAME TRACE-BACK  ↑PARENT-ID <ID>)
  -->
  (MODIFY 1  ↑STATUS PENDING)
  (MAKE  GOAL  ↑ID (GINT)  ↑STATUS ACTIVE  ↑NAME TRACE-BACK  ↑PARENT-ID <ID>))
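
(Informally:  if the FIND-SOURCE goal is active, the source location is
still unknown, the spill's known back region is the storm drain, and no
TRACE-BACK subgoal exists yet, then mark FIND-SOURCE pending and create an
active TRACE-BACK subgoal.)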


(P  INTERRUPT-1
  (RESPONSE  ↑NUMBER <N>)
  (SUGGESTION  ↑STATUS PENDING  ↑NUMBER <N>  ↑ID <ID>)
  (GOAL  ↑STATUS IO-WAIT  ↑ID <ID>  ↑NAME INTERACT)
  -->
  (MODIFY 3  ↑STATUS ACTIVE))
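
(Informally:  when a response arrives whose number matches a pending
suggestion, reactivate the INTERACT goal that was waiting on that
suggestion.)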

Appendix 2: A sample interaction with SPILL
 IF THE ANSWER TO A QUESTION IS UNKNOWN ENTER NIL

 ENTER THE NAME OF THE PERSON WHO REPORTED THE SPILL AND THE DATE:
* cooper
* 18-sept-80

 ENTER THE TIME AT WHICH THE SPILL WAS REPORTED BY COOPER
* 20:30

 IS THE PERSON WHO REPORTED THE SPILL A SPILL EXPERT:
* yes

 ENTER THE LOCATION AND LOCATION-TYPE -- CREEK LAKE STORM-DRAIN GROUND --
 WHERE THE SPILL WAS SIGHTED
* weir-1
* creek

 ENTER THE CLASS OF THE MATERIAL SPILLED:
* oil

 ENTER THE ESTIMATED VOLUME OF MATERIAL SPILLED:
* 30

 ENTER THE NAME OF THE MATERIAL SPILLED:
* nil

 ENTER THE HAZARD LEVEL OF THE MATERIAL SPILLED:
* nil

 ENTER THE COLOR OF THE MATERIAL SPILLED:
* black

 ENTER THE NUMBER OF PERSONNEL AVAILABLE TO DEAL WITH THE SPILL:
* 3

 WHAT IS THE NAME OF PERSON 1 :
* smith

 WHAT IS THE NAME OF PERSON 2 :
* jones

 WHAT IS THE NAME OF PERSON 3 :
* larson


 [ 6 ] LARSON : FIND THE HEAD OF THE SPILL

 [ 10 ] JONES : DETERMINE WHICH OUTFLOW UPSTREAM OF WEIR-1 IS THE SOURCE
        OF THE CONTAMINANT

 [ 12 ] SMITH : COLLECT A SAMPLE OF THE CONTAMINANT AND TAKE IT FOR ANALYSIS
NIL
(response 10 woc-6)

 [ 15 ] JONES : DETERMINE WHETHER MANHOLE M6-1 IS CONTAMINATED
NIL
(response 15 yes)

 [ 16 ] JONES : DETERMINE WHETHER MANHOLE M6-2 IS CONTAMINATED
NIL
(response 16 yes)

 [ 17 ] JONES : DETERMINE WHETHER MANHOLE M6-4 IS CONTAMINATED
NIL
(response 17 no)

 [ 18 ] JONES : DETERMINE WHETHER MANHOLE M6-3 IS CONTAMINATED
NIL
(response 18 yes)

 STATUS REPORT: BLDG-3518 MAY CONTAIN THE SOURCE

 [ 20 ] JONES : SEARCH BUILDING BLDG-3518 FOR DAMAGED CONTAINERS OF OIL
        WITH CAPACITY GREATER THAN 30 GALLONS
NIL
                ---------------
-------

∂25-Sep-80  1246	RDG  	Report of the ESW Oil Spill Effort
To:   "@ESKE.DIS[RDG,DBL]" at SU-AI   
RLL: A Representation Language Language

Doug Lenat and Russ Greiner



Unlike most groups,  we (Lenat and  Greiner) focused on  the entire  spill
crisis treatment scenario,  and paid  only slight extra  attention to  the
subproblems of Discovery  (initial intake interview)  and Source  Location
(by backtracking or by indirect  analysis).  In fact, we considered  OTHER
problems, such as locating an escaped convict (where the unwanted material
is spilled onto conduits (roads) and must be located, etc.)  Whenever  any
piece of knowledge was added to RLL, the question we invariably posed was:
can this be generalized  or abstracted in some  way, and still retain  its
potency, its power for constraining search?  Most of the knowledge we have
so far represented within RLL  is common to both  the convict and the  oil
spill problems, and  is represented in  a manner usable  by the system  in
either context.  Of course there are individual differences in technology,
such as road blocks instead of absorbent booms, but those differences  are
at a much lower level (e.g., terminology) than most inference processes
deal with.  This kind of generality is one of the major powers of RLL --
and, due to the effort required to exercise it, one of its greatest
liabilities when the constraint is to have a running system in two days.  As
we hope RLL's mechanisms will eventually be widely used, we are attempting
to enter the information -- whether data or control structure -- in as
unbiased and extensible a manner as possible.  We chose to sacrifice
"performing a flashy demo" for "representing things the right way"; toward
the end,  we had  to sacrifice  both of  them to  get even  a meager  demo
running.

One of the early  exercises we performed was  to hand-simulate a  dialogue
with the system.  It became  clear that we would  have to choose a  "role"
for the system to play.   We noted that the  greatest need was during  the
night, when nightshift workers who  were ill-equipped to deal with  spills
nevertheless had to. Thus, our model is one where a spill is  encountered,
called in to the  program, and the latter  then directs the activities  of
the discoverer, sends out other teams, notifies various authorities,  etc.
Thus the role is one of REPLACING the expert in this process.  We  believe
that almost all the  information can, however, also  be used for  tutorial
purposes,  for  advising  an  expert,   etc.   This  is  one  reason   for
representing each piece  of knowledge EXPLICITLY,  rather than burying  it
within a piece of code.

As our simulation continued, we observed  that there was frequent need  to
"suspend" one of  the major  tasks we  had begun,  to attend  to some  new
datum, some new conclusion with dramatic consequences ("Don't breathe that
stuff!"), or simply because the current task seemed to be bogging down.

The control structure which this type of interaction suggested (to us)  is
an AGENDA of tasks, very much like the agenda of AM. Each task would  have
some priority rating, and when selected would fire production-like  rules,
until it was satisfied or until its quantum of cpu time expired (in  which
case it would be suspended).  During the firing of a rule, it could direct
the program to add new tasks to the agenda, modify the data base, ask the
user for some information, tell him some, etc.

While we have worked on RLL for some time, we had not (until this problem)
implemented this type of control structure; hence our first major task was
to describe it to RLL.  (This meant encoding it  as a collection of  units
including rules, tasks, priorities, special values returnable by rules and
by tasks, etc.)  These new control-related units were entered into one  of
our permanent system knowledge bases (EURISKO), rather than on the new one
we had created for this task (SPILL), because of the future utility of the
agenda mechanism.
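
For concreteness, a task unit on this agenda might be sketched, in the unit
notation of the accompanying representation notes, roughly as follows.
(Priority and RuleList slots appear in the examples there; the remaining
slot names and the particular values are illustrative only.)

   Task#066
     Isa:	  (AnyTask)
     Priority:	  350
     RuleList:	  [rules to fire when this task is selected]
     Status:	  Suspended
     CpuQuantum:  [time allotted before this task is suspended]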

The second "lack" we felt in the then-extant RLL system was the notion  of
gradual restriction (corresponding  to the SPEC  relation, defined in  the
MOLGEN UNITS package  [Stefik]).  In  particular, we needed  to deal  with
generic events, whose descendants could become gradually more specialized,
instantiated, particularized.  We  added the units  for events in  general
and  pipe  breaks,  flows,  etc.  in  particular.   We  also  added  units
describing the type of  gradual restriction we  wanted to have  connecting
events.  We  represented  several  kinds of  connections  between  events,
several  kinds  of  slots  that  were  new  to  RLL:    MoreGeneralEvents,
CausesOtherEvents,    CausedBy,    PriorEvents,     MoreSpecializedEvents,
LaterEvents, SimultEvents, etc.

The third thing we noticed  was that RLL had no  notion of a Problem.   It
had previously been used only on open-ended types of tasks, never those
admitting a precise answer or solution.   Units for these concepts had  to
be added.

Finally, we began to enter units for concepts which had at least SOMETHING
to do with  the target  task: liquids, chemicals  (and oils  and acids  in
particular), pH,  flows,  containers,  mixings, etc.   At  this  level  of
abstraction, none of this was specific to the particular problem given.

The incorporation of  the above  units took  two days  of part-time  work;
probably 25 man-hours in  all. (Much additional time  was spent fixing  up
RLL: In addition to fleshing out many skeletons, like the agenda mechanism
mentioned above, there  were a  host of  low level  bugs which  had to  be
fixed.)  Before this task was completed, we had sketched out how we  would
represent such  task-specific  details as  the  White Oak  Creek  drainage
system, the four  major pieces  of legislation which  define the  possible
violations, the particular counter-measures which can be taken to halt the
flow of oil or acid, etc.  Units for some of these have been entered.  The
final type of problem-specific knowledge which we had to enter, to get RLL
"running" was the  set of  rules which manage  the various  phases of  the
spill   crisis   management   problem.    These   ranged   from    trivial
information-requesting rules (If  the discoverer's name  isn't known,  ask
it) to judgmental rules for counter-measures (If the flow is to be stopped
at a Weir, then use a skimmer). Not all of these have been added, and as a
result the "demos" produced by the system are incomplete. Essentially,  we
began entering task rules on Tuesday night -- into a system which was only
then at the stage most other groups had on Saturday.  Because most of  the
preliminary knowledge  was represented  in a  reasonable way,  it will  be
usable in the future.  It is important to realize that RLL itself was
altered.  (We are NOT including the removal of various bugs in this
category.)  In addition to the SPILL-related specific facts just entered,
RLL now better understands agendas, generic objects, and control
mechanisms.  As these will remain in RLL, it will be considerably easier
to implement subsequent applications which are "close" to this one.

The details of our small implementation  can best be apprehended from  the
figures, traces,  knowledge bases,  etc.  which accompany  this  document.
Some simple  consultations (dialogues)  have been  run through,  including
directing the  user in  a backtrack  search to  locate the  source of  the
spill.

Note in  particular  the  manner  in  which  one  task  starts  (interview
discoverer) but  spends only  a few  seconds  on it.   Some of  the  rules
associated with achieving that task are fired (getting the spill type and
location), but many are not (getting the discoverer's department address).
Of higher priority is a  preliminary identification of the material  which
has  spilled,   and  so   the  Discovery   task  is   suspended  and   the
material-characterization task is chosen to  run.  After a preliminary  ID
is made (oil,  acid, perhaps one  level more detail,  but NOT the  precise
chemical composition or  trade name of  it), that task  too is  suspended.
The highest priority then is  Evaluating potential hazards.  Thus,  within
about 10 cpu seconds, RLL has formed a tentative picture of what spilled,
where, and how dangerous it is.   Gradually, that picture is fleshed  out,
as more tasks are executed, and as suspended tasks are resumed and  worked
on some more.  The power of the agenda is in allowing any "high  priority"
rule to trigger at almost any time.

The versatility and adaptability of  this agenda mechanism, together  with
later general utility of  the knowledge, are the  major strengths of  this
implementation.  Similar  flexibility can  be found  in the  RLL  language
itself.  To understand its malleability, one has to consider the range of
things which the RLL user may regard as "parameters" -- i.e., what he is
allowed to specify, as opposed to what he finds hardwired in.

Each of the expert system building systems has a different idea of what qualifies
as domain specific information (that is, what the user should be expected
to enter).  For example, none of these ESBSs (expert system building
systems) would be expected to know, a priori, the specifics of this
particular plant, such as "Pipe90 connects to Pipe82" or "All permanent
storage tanks are diked".  Similarly, none of these systems would have
facts at one higher level -- for example, information about chemistry
(e.g., Oil#33 is corrosive) or connectivity (e.g., that each pipe will flow
into some other pipe, unless it leaks) -- built in.  As such, information
in both categories would have to be entered.

RLL goes one step  further, by allowing the  user to specify what  control
regime to use as well.  This does NOT imply this information must be
entered in LISP code, any more than the other facts (pertaining, for
example, to acids or dikes) had to be given in so low-level a manner.
RLL first includes a set of known mechanisms (e.g., Backward Chaining Rules
or Agenda), from which the user may conveniently select the one he wishes.
In addition, RLL provides a collection of tools, which the user can use to
construct his own new control regimes, if necessary.  These tools describe
the control information in high level, natural terms.

As for the weaknesses, one of the  most obvious ones is the extra cost  of
getting this system running: we  can't assert that Pipe3 flows-into  Pipe4
without first creating a unit for the relation flows-into, explaining that
that isa Slot,  that it is  meaningful for any  two conduits, etc.,  etc.,
etc. One thing that  might be expected  to be a  weakness is the  apparent
inefficiency this  high  degree  of "interpretiveness"  implies.   To  the
contrary, this is one of RLL's big strengths: see [Lenat, Hayes-Roth,  and
Klahr] for  details of  how  caching and  other techniques  recapture  the
efficiency that would otherwise be  lost.  Admittedly, the FIRST time  you
ask RLL to do something, it takes a LONG time, but from then on a  similar
type of request will  return fairly quickly.  One  severe weakness is  the
absence of a front-end; the user  must build his system by editing  units,
rather than through the nice human-engineered dialogue he can have with,
e.g., EMYCIN.  The final SYSTEM produced, however, can have a simple user interface
(and in fact this is one reason we  had the ROLE of our system be that  of
the expert -- it could have the initiative almost exclusively, and  simply
ask questions of the user).

In this experiment,  we have been  forced to the  realization that, for  a
small amount of time, a simpler language (such as EMYCIN or LISP) is  able
to achieve SOME results more quickly.  Some of the goals of RLL, which
include aiding the user in producing an expert system, just haven't been
realized yet.  The experience has also reinforced our view of the
process of  building  an  expert  system as  an  incremental  approach  to
competence.  Innumerable times, compromises have to be made, sacrifices of
"the right way"  to the  altar of "getting  started".  We  have honed  our
abilities to make such sacrifices (one of the requisites of a C.K.E.), and
have honed our facilities  to make them  in a way  that does not  preclude
redoing things in a  better way later (another  CKE requisite).  To  close
with one example of this process,  our original design had four  Violation
rules, one for each piece of legislation; later, as we learned more  about
the complexities  of  those  regulations, we  realized  the  necessity  of
replacing those four  rules with four  separate tasks, each  of which  had
several rules attached.   This kind  of flexibility,  which is  admittedly
just beginning in RLL, is the cornerstone of successful KEing.

∂25-Sep-80  1300	RDG  Representing various things in RLL
To:   "@ESKE.DIS[RDG,DBL]" at SU-AI   

	Comments:

There are several obvious problems with  showing the set of symbols  which
we claim "represent"  some fact.   First, these  symbols are  semantically
meaningful only  with  respect  to  some interpreter.   For  a  system  as
versatile as RLL,  the same unit  may have many  "meanings", depending  on
which interpreter is being used --  and this decision is based largely  on
the second issue: determining what problem or type of problem (e.g.,
answering questions or performing deductions) we are trying to solve.

(As an extreme example, any of these systems could claim the quoted string
of characters represents each of these  statements; and if their only  use
was as examples of English words, this would be quite adequate.)

In RLL, this is not as much a problem as it is in other systems, as things
like the  interpreter  are  themselves  explicitly  represented.   In  the
examples which follow, I will try to indicate the question this fact is
trying to answer, and provide some idea of the interpreter which would be
involved.

	Notation:

1. "U:S" refers to value of the S slot of the unit, U.

2. A unit will be shown as

   TheUnit
     Slot-1:	Value-1
     Slot-2:	Value-2
       :		  :
     Slot-N:	Value-N

  where Slot-i is the name of a slot, whose value is  Value-i. (i.e.
  TheUnit:Slot-i = Value-i). This unit, TheUnit, may also have other slots not
  shown.

3. Specifying some Value-i using the form "[...]" means
  I am writing an easy-to-read description, rather than one which RLL would
  actually be able to understand.

4. Units of the form ___#0032 are used to indicate an arbitrarily named
  (a la GENSYMed) unit,
  whose interpretation should be clear from information found on this unit.

	Naming convention:

 Any--- refers to the class of ---'s  (e.g., AnyBuilding)
 Typical--- refers to a typical member of such a class (e.g., the facts
	stored on TypicalBuilding are defaults for what to expect for buildings
	in general.)
 My--- refers to a syntactic slot.

	References:
[Greiner] - "RLL-1, A Representation Language Language", HPP-Memo-9, September 1980
[Lenat, Hayes-Roth, Klahr] - "Cognitive Economy", 6-IJCAI
[Stefik] - PhD Thesis, Stanford University, June 1979.

!	***** Fact # 1.*****
1.  M6-3 feeds into M6-2.

Q: What does M6-3 feed into?

Solution: (GetValue 'M6-3 'FeedsInto), which returns M6-2.

Start with units representing M6-3 and M6-2 (each of which is a member  of
AnyManhole); and then set the value of the FeedsInto slot of M6-3 to be
M6-2 -- i.e., M6-3:FeedsInto ← M6-2.

   M6-3
     Isa:	(AnyManhole)
     FeedsInto:	(M6-2)

   M6-2
     Isa:	(AnyManhole)
     FeedsFrom:	(M6-3)

   AnyManhole
     Examples:		(M6-2, M6-3, ...)
     Description:	This represents the class of all manholes.

We additionally must have a unit to represent the "FeedsInto" type of
slot.  This too is easy -- this bundle of knowledge holds facts like "the
value of x:FeedsInto must be a list of manholes", "the only x for which
x:FeedsInto is defined is a manhole", and "if x:FeedsInto = y, then
y:FeedsFrom = x" (i.e., FeedsInto:Inverse = FeedsFrom).

   FeedsInto
     Isa:		(AnySlot)
     Description:	This slot maps from manholes to manholes.
     Inverse:		FeedsFrom
     Definition:	(λ (u) [Find the pipe, P, to which this manhole connects.
				 See which pipes, p-i, this P connects to.
				 Return the list of m-i, where each m-i is connected
				 to pipe p-i.])

Note - the definition of a slot indicates how to deduce its value --  i.e.
S:Definition is a function, F,  whose value, (F u),  is the value to  fill
u:S.
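
For example, if no value had yet been stored for M6-3:FeedsInto, the call
(GetValue 'M6-3 'FeedsInto) could apply FeedsInto:Definition to M6-3,
computing (M6-2) by tracing through the connecting pipe -- and, via the
caching described in [Lenat, Hayes-Roth, Klahr], store that value for
subsequent requests.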


!	***** Fact # 2.*****
2. All permanent storage tanks are diked.

[Question: Is this to say that by definition all permanent storage tanks are
 diked, or that they just happen to be?
 I will assume the latter:]

A quick and dirty solution is:

   AnyPermanentStorageTank
     Description:	This represents the class of all permanent storage tanks
     TypicalExample:	TypicalPermanentStorageTank
     SubClass:		AnyStorageTank

   TypicalPermanentStorageTank
     Description:	This represents a typical permanent storage tank --
   			 its values are default values for p.s.t.'s in general
     TypicalExampleOf:	AnyPermanentStorageTank
     Diked?:	      	T

  [Note that the Description, TypicalExample and TypicalExampleOf slots are already
defined in RLL; and I'm assuming the boolean value "Diked?" slot was already in
existence when this question was posed.]

This solution does require the existence of two seemingly unnatural units,
one to represent the class of permanent storage tanks, and the other to
describe the typical permanent storage tank.  Another problem is that the
slots on that TypicalPermanentStorageTank are really supposed to contain
default values for these p.s.t.'s, not definitional ones.  Another solution,
which makes these things explicit, uses a (universal) variable.  (It also
spares us from having to create the fairly unnatural
...PermanentStorageTank units.)

   AnyStorageTank
     Description:	This represents the class of all storage tanks
     UnivElements:	(x)

   x
     UnivIsa:		(AnyStorageTank)
     MyDefiningSlots:	(UnivIsa TimeOfStorage)
     MyAssertionalSlots: (Diked?)
     Description:	All facts stored here are true for ALL p.s.t.'s.
     TimeOfStorage:	Permanent
     Diked?:		T

The slots of the x unit above indicate that x is defined as a  universally
quantified member of AnyStorageTank which is permanent. Furthermore,  each
such member is diked. In predicate calculus:

	∀x. x:UnivIsa = (AnyStorageTank) &  x:TimeOfStorage = Permanent 
		=> x:Diked? = T

The    matching    algorithm    knows    to    examine    the    set    of
AnyStorageTank:UnivElements whenever  some  question is  asked  concerning
members of some subset of this class.
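
For example, given a (hypothetical) unit Tank#14 which isa AnyStorageTank
with TimeOfStorage = Permanent, the query (GetValue 'Tank#14 'Diked?)
would be answered by matching Tank#14 against x, returning T.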

!	***** Fact # 3.*****
3. Oil spilling into water causes a sheen.
  Here we need the oft-lampooned event units:

   Event#1
     Description:	This refers to the event of oil spilling into water.
     GenlE:		(Event#3, ...)
     CausesE:		(Event#2)
     Substance1:	Oil
     Substance2:	Water

   Event#2
     Description:	This refers to the event of a sheen forming on water
     GenlE:		(Event#4, ...)
     CausedByE:		(Event#1)
     OnSubstance:	Water
     Appearance:	Sheen

For perspective, we include a few of the more general units referred to above:

   Event#3
     Description:	This refers to the event of one fluid entering another.
     GenlE:		(Event#5, ...)
     SpecE:		(Event#1, ...)
     Substance1:	[Any fluid]
     Substance2:	[Any fluid]

   Event#4
     Description:	This refers to the event of some change in appearance of
	  		   some quantity of liquid.
     GenlE:		(Event#3, ...)
     SpecE:		(Event#2, ...)
     OnSubstance:	[Any fluid]
     Appearance:	[Any feature]

The various event-relating slots, such as GenlE, SpecE, CausesE and
CausedByE, are already defined (see the GenlE unit below), and point to
event units which are, respectively, more general than, more specific than,
the results of, and the causes of the given event unit.  Note all of these
may point to a set of values -- e.g., the unit representing Event#2 would
also qualify as a specialization of an event referring to a sheen forming
on some arbitrary liquid.

The slots Substance1  and Substance2 would  have to be  defined, as  would
OnSubstance and  Appearance.  While  we would  not expect  any general  KB
(knowledge base) to come  equipped with things as  specific as Event#1  or
Event#2, we  might  expect to  find  something like  Event#3,  and  almost
certainly Event#4.   Even something  like  Event#4 is  not as  general  as
possible -  it  specializes  the  ChangeInSomeSubstance  event  unit,  for
example.

Two final comments:  1) Units like Event#1 and Event#2 are by no means the
limit of specificity:  Event#1, for instance, is more general than
OilFromPipe93SpillingIntoWOC, which, in turn, can be further restricted
into MachineOilFromPipe93SpillingIntoWOCAfterOutFall93 (which can be
restricted to DayOldMachineOil#2FromPipe93'sFirstOutletSpillingIntoWOC-
AfterOutFall93onAug23, ...).  2) The second point, which was never conveyed
during the workshop, is that these units are not simply produced
arbitrarily.  The idea is that whenever one has something to say about some
event, one can create a unit to hold this information.  If the fact
is general, it should go on a general event unit, which enables the unit's
descendants to inherit the fact via the GenlE link.  For example, we
need to store the fact that "Oil spilling into water causes a sheen" just
once (shown above), and we will then expect to find a sheen on every water
area associated with each more specialized unit.

Each of the slot types SpecE, GenlE, CausesE and CausedByE has the same
basic form, e.g.:

   GenlE
     Description:	This points from an event to those events which are more
	  		  general -- i.e. have fewer specifics specified.
     Datatype:		[Descendant of AnyEvent]
     Format:		[Set of values]
     MakeSenseFor:	[Descendant of AnyEvent]
     Inverse:		SpecE

!	***** Fact # 4.*****
4. The types of countermeasures taken are a boom at Wier1 and skimmer at
Wier2.

Here we can use the same event units, using the PreventedByE slot to point
from this  problem  (which requires  these  countermeasures) to  the  list
(Event#8 Event#9).
(Note PreventedByE:Inverse = InhibitsE.)

   Event#7
     Description:	This is the problem which necessitated these countermeasures
   		 - eg an oil spill at some location.
     PreventedByE:	(Event#8 Event#9)

   Event#8
     Description:	Place a boom at Wier1.
     InhibitsE:	(Event#7)
     GenlE:		(Event#10)
     Object:		Boom#003
     Location:		Weir1

   Event#9
     Description:	Place a skimmer at Wier2.
     InhibitsE:	(Event#7)
     GenlE:		(Event#11)
     Object:		Skimmer#005
     Location:		Weir2


For context, we include the more general event units:

   Event#10
     Description:	Place a boom somewhere.
     SpecE:		(Event#8)
     GenlE:		(Event#12)
     Object:		[Any Boom]
     Location:		[Any Location]

   Event#11
     Description:	Place a skimmer somewhere.
     SpecE:		(Event#9)
     GenlE:		(Event#12)
     Object:		[Any Skimmer]
     Location:		[Any Location]

   Event#12
     Description:	Place an object somewhere.
     SpecE:		(Event#10 Event#11)
     Object:		[Any Object]
     Location:		[Any Location]

!	***** Fact # 5.*****
5. Oil sometimes spills out of broken machinery.

The real problem here is the modality "sometimes".  To represent this, we
use the PossiblyCausesE (resp. PossiblyCausedByE) type of slot, which is
analogous to the CausesE (resp. CausedByE) type of slot; and, in
fact, y ε x:CausesE => y ε x:PossiblyCausesE (resp. y ε x:CausedByE =>
y ε x:PossiblyCausedByE).  Of course, PossiblyCausesE:Inverse =
PossiblyCausedByE.

   Event#13
     Description:	The event of a machine breaking.
     PossiblyCausesE:	(Event#14)
     GenlE:	   	(Event#15)
     AffectedObj:	[Any machine]
     WhatHappened:	"It broke"

   Event#14
     Description:	The event of oil spilling from a machine.
     PossiblyCausedByE: (Event#13)
     GenlE:	      	(Event#17)
     Substance:	  	[Any oil]
     FromLocation:	[Any machine]


As usual, the more general objects:
   Event#15
     Description:  The event of some object breaking.
     GenlE:	   (Event#16)
     SpecE:	   (Event#13)
     AffectedObj:  [Any object]
     WhatHappened: "It broke"


   Event#16
     Description:   The event of something spontaneously happening to an object,
			i.e., NOT due to some external reasons
			(so "old age" would qualify)
     SpecE:	   (Event#15)
     AffectedObj:  [Any object]
     WhatHappened: "It broke"

   Event#17
     Description:  The event of some liquid spilling.
     GenlE:	   (Event#18)
     SpecE:	   (Event#14)
     Substance:	   [Any liquid]
     FromLocation: [Any location]

!	***** Fact # 6.*****
6. Many inventory lists are incomplete.

Here, the problem is with the "many" modifier.
A solution which begs that subissue is:

   AnyInventoryList
     Description:		This represents the class of all inventory lists.
     IntentionalSubClass:	(AnyIncompleteInventoryList)

   AnyIncompleteInventoryList
     Description:	This represents the class of all incomplete inventory lists.
     SuperClass:	(AnyInventoryList)
     TypicalExample:	TypicalIIL
     RelativeSize:	Many
     MyDefiningSlots:	(SuperClass TypicalExample)
     MyAssertionalSlots: (RelativeSize)

   TypicalIIL
     Description:	This represents a typical incomplete inventory list.
     TypicalExampleOf: 	AnyIncompleteInventoryList
     Complete?:		False

The semantics of IntentionalSubClass implies AnyIncompleteInventoryList ⊂
AnyInventoryList, where each member satisfies some additional constraints.
(This same mechanism could have been used to state that, say, 83% of all
inventory lists are incomplete.)

!	***** Fact # 7.*****
7. Water flows through a pipe at 0.5 ft/sec.

   TypicalPipe
     Description:	This represents the typical pipe.
     TypicalExampleOf:	AnyPipe
     FlowRate:		0.5 ft/sec

Purists may argue that all I've stated is that the default speed of water
flow is 0.5 ft/sec.  Well, that depends on the definition of FlowRate --
perhaps this slot is defined such that the value of U:FlowRate is the
value of T:FlowRate, where T is the prototype (i.e., nearest appropriate
typical example) of U, whenever U is an individual; or otherwise the value
actually stored on the U unit.
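
Under that definition, (GetValue 'Pipe90 'FlowRate) would find no value
stored on the individual Pipe90 and so would return TypicalPipe:FlowRate,
i.e., 0.5 ft/sec.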

Alternatively, we could create a new variable, x, which is a "universal
member" of AnyPipe, and show here that the flow rate is necessarily
0.5 ft/sec:


   AnyPipe
     Description:  This represents the class of all pipes.
     UnivElements: (x)

   x
     UnivIsa:	       (AnyPipe)
     MyDefiningSlots:  (UnivIsa)
     MyRefiningSlots:  (FlowRate)
     Description:      Facts stored here are true for ALL pipes.
     FlowRate:	       0.5 ft/sec

!	***** Fact # 8.*****
8. If the chemical is HF, then tell the observer not to breathe it.

   Rule#332
     Isa:		(AnyRule)
     IfTrulyRelevant:	((EQ 'Chemical 'HF))
     ThenTellUser:	"Do not breathe chemical!!"
     Priority:		High
     OnTask:		ImminentDanger

   ImminentDanger
     Description: This task tries to determine whether the current situation is
	  		 dangerous and, if so, to suggest solutions/fixes/alternatives.
     Isa:	  (AnyTask)
     RuleList:	  (Rule#332, ...)

Of course, to see how this would work would require showing the interpreter
which would process this rule (as well as the interpreter used for this
task).  This information is stored explicitly (and fairly declaratively) on
various units -- for example, TypicalTask and TypicalRule have expressions
which serve as default values for the components of such functions, while
the units which represent various slots, such as HowToProcess, describe the
details of how to assemble these functions.  See the RLL memo [Greiner] for
further specifics.

!	***** Fact # 9.*****
9. When attempting to find the source of a spill in the creek, look for the
chemical in the manhole nearest the creek.

   LocateSource
     Isa:		(AnyTask)
     Description:	This task tries to find the source of a spill.
     RuleSet:		(Rule#209...)

   Rule#209
     Isa:		(AnyRule)
     OnTask:		(LocateSource)
     Description:	This rule suggests how to find the source, by looking in
			 an appropriate manhole.
     IfPotentiallyRelevant:	((Unknown 'SourceLoc))
     IfTrulyRelevant:	((EQ 'SpillObserved 'Creek))
     ThenTellUser:	"Look in the manhole nearest the creek."

NOTE: If the purpose of the rule was to actually determine which manhole to examine,
it might make sense to change Rule#209 by replacing its ThenTellUser with
"Look in the manhole nearest the creek. I'll tell you which manhole in a second."
and adding the slot
     ThenAddToAgenda:	(FindBestManhole),

where FindBestManhole is a task which will tell the user which manhole to examine,
if necessary.

!	***** Fact # 10.*****
10. For all oil spills, if there are limited human resources, do the containment
before locating the source.

   Rule#261
     Description:	    This is used to order tasks, for the spill problem.
     IfPotentiallyRelevant: [AND (Trying to order tasks)
				 (This is a spill problem)]
     IfTrulyRelevant:	    ((EQ 'ManPower 'Limited)
			     (EQ 'SpillType 'Oil))
     ThenOrderTasks:	    [Put Containment before LocateSource]
     OnTask:		    OrderTasksForSpillProblem
     Specificity:	    200

   Containment
     Description:  This task suggests ways to contain a spill.
     Isa:	   (AnyTask)
     Priority:	   369

   LocateSource
     Description:   This task tries to find the location of a spill.
     Isa:	    (AnyTask)
     Priority:	    44

   OrderTasksForSpillProblem
     Description: This system-level task is responsible for ordering the tasks run.
		    Note it is in another, meta-level, bin from the other
		    tasks, and so its priority level is incomparable with theirs.
     Isa:	  (AnyTask)
     Priority:	  210

[Specificity will be defined on the next page.]  In fact, Rule#261 might
have been derived from a more general heuristic which suggested
performing the more urgent things before the less timely, noting that this
heuristic is especially relevant when there is reason to believe not all
of the tasks will get a chance to run.

   Rule#21
     Description:	    This is used to order tasks.
     IfPotentiallyRelevant: [AND (Trying to order tasks)
				 (Not all tasks will be done)]
     ThenOrderTasks:	    [Put tasks in order of descending priority.]

Note the  "Priority" slot  may be  computed  from a  host of  things.   It
should, for  example, use  the fact  that the  spill type  is oil  in  its
calculation; such that the the relative Priority of Containment is  higher
than LocateSource.   In  fact,  some  rule like  Rule#261  might  even  be
detrimental in cases where  you need to determine  the source to learn  of
the type  of  spill, as  this  rule might  end  up postponing  this  spill
location task.

Better would be to use the priority scheme as it stands, and have Rule#261
simply increase the value of Containment:Priority whenever the spill is
oil, and increase it more when human resources are scarce.
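
Under that scheme, Rule#261 might instead be sketched as follows
(ThenAdjustPriorities is an illustrative slot name, not an existing RLL
slot):

   Rule#261'
     Description:	    This raises Containment's priority for oil spills.
     IfPotentiallyRelevant: [This is a spill problem]
     IfTrulyRelevant:	    ((EQ 'SpillType 'Oil))
     ThenAdjustPriorities:  [Increase Containment:Priority; increase it
			     further if (EQ 'ManPower 'Limited)]
     OnTask:		    OrderTasksForSpillProblem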

Needless to say, much of the smarts of this comes from the fact that
EURISKO (on top of RLL) knows when to run these various rules -- but this
too is described in the units (as well as in the RLL memo [Greiner]).

!	***** Fact # 11.*****
11. If the spill is gushing, locate the source before trying to contain it.

Again, the difficulty of this rule is primarily in assuring that it be
called at the correct time -- i.e., when deciding which task to run next.
This, of course, requires knowledge of how the interpreter must work -- as
one of the interpreter's chores is finding the rules necessary to determine
which tasks to process, and in what order.  Modulo this, we would represent
the above fact:

   Rule#443
     Description:	    This is used to order tasks, for the spill problem.
     IfPotentiallyRelevant: [AND (Trying to order tasks)
				 (This is a spill problem)]
     IfTrulyRelevant:	    ((EQ 'CurrentState 'Gushing))
     ThenOrderTasks: 	    [Put LocateSource before Containment]
     OnTask:		    OrderTasksForSpillProblem
     Specificity:	    523

Things like the conflict resolution scheme become important if ever both
this Rule#443 and the earlier Rule#261 are triggered.  One solution would
be to use the more specific rule.  So here, seeing Rule#443:Specificity =
523 is greater than Rule#261:Specificity = 200, we take Rule#443's advice
over Rule#261's.  This specificity should be calculated - but how to do
that is another problem.  (Alternatively we could assign such values
statically, for a simpler, but less interesting, solution.)

Again, it would seem better to use the approach mentioned on the previous
page; which, here, would have Rule#443 actually diminish the Priority of
Containment as one of its Then parts, rather than use the ThenOrderTasks
slot.  Rule#443 was shown in the form above only because this is a
"direct translation" of the fact posed.
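
Under that scheme only the Then part of Rule#443 need change; a sketch,
replacing the ThenOrderTasks slot shown above:

     ThenModifyUnit:	[Decrease Containment:Priority for as long as the
			 CurrentState is Gushing]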

Note that in RLL all of these schemes can be implemented; the user is
left only with the task of deciding which is the most appropriate -- as
opposed to asking how he can bend a single control system to accommodate
this statement.

!	***** Fact # 12.*****
12. If a flammable liquid is spilled, call the fire department.

This is fairly straightforward:

   Rule#391
     Description:	    This suggests calling the fire dept if liquid flammable.
     IfPotentiallyRelevant: ((Known 'SpilledLiquid))
     IfTrulyRelevant:	    ((Apply 'Flammable 'SpilledLiquid))
     ThenTellUser:	    "Call fire department!"
     OnTask:		    ImminentDanger
     Specificity:	    854

!	***** Fact # 13.*****
13. Any two sources will flow into a common pipe.

I interpret this fact as meaning:
∀ s1, s2 ε Sources. ∃ p0 ε Pipes. FlowsInto(s1 p0) & FlowsInto(s2 p0).

   AnySource
     Description:	This refers to the class of all sources.
     UnivElements:	(s1 s2)

   AnyPipe
     Description:	This refers to the class of all pipes.
     ExistElements:	(p0)

   p0
     Description:	This is the skolem variable, p0 = f(s1, s2) mentioned above.
     ExistIsa:		(AnyPipe)
     FlowsFrom:		(s1 s2)
     SkolemOf:		(s1 s2)
     MyDefiningSlots:	(ExistIsa FlowsFrom)

   s1
     Description:	This is one of the sources mentioned above.
     UnivIsa:		(AnySource)
     FlowsInto:		p0
     MyDefiningSlots:	(UnivIsa)

   s2
     Description:	This is the other source mentioned above.
     UnivIsa:		(AnySource)
     FlowsInto:		p0
     MyDefiningSlots:	(UnivIsa)

Note - ExistIsa and ExistElements are the existential counterparts to
UnivIsa and UnivElements.

!	***** Fact # 14.*****
14. There's a tank outside building 3035 with a dike around it.

I interpret this fact as meaning
∃ t1 ε Tank. Outside(t1 B3035) & Diked?(t1 T).

   AnyTank
     Description:	This refers to the class of all tanks.
     ExistElements:	(t1)

   t1
     Description:	This is that tank.
     ExistIsa:		(AnyTank)
     OutsideBldg:	B3035
     Diked?:		T
     MyDefiningSlots:	(ExistIsa OutsideBldg Diked?)


A second interpretation is
∀ tu ε Tank. Outside(tu B3035) => Diked?(tu T).

   AnyTank
     Description:	This refers to the class of all tanks.
     UnivElements:	(tu)

   tu
     Description:	This is any tank outside building 3035.
     UnivIsa:		(AnyTank)
     OutsideBldg:	B3035
     MyDefiningSlots:	(UnivIsa OutsideBldg)
     MyAssertionalSlots: (Diked?)
     Diked?:		T

Note this ambiguity is a linguistic problem, due to English's imprecision.

!	***** Fact # 15.*****
15. Rain onto oil-contaminated soil causes oil flushing.

Back to events:

   Event#18
     Description:	This refers to rain on oil-contaminated soil.
     CausesE:		(Event#19)

   Event#19
     Description:	This refers to oil flushing.
     PossiblyCausedByE:	(Event#18 ...)

We could add the usual context -  eg rain on soil, which is more  specific
than rain on ground, which  is a special case  of water on solid  surface;
but such a digression should not be necessary anymore.

!	***** Fact # 16.*****
16. The types of oil stored in building 3035 are X, Y, and Z.

   B3035
     Isa:		(AnyBuilding)
     Description:	This refers to building 3035.
     HousesOils:	(X Y Z)

   X
     Isa:		(AnyOil)
     HousedIn:		B3035

   Y
     Isa:		(AnyOil)
     HousedIn:		B3035

   Z
     Isa:		(AnyOil)
     HousedIn:		B3035

I suspect this question was asking about some trickier sort of thing --
dealing perhaps with quantities of fluid entities.  However, being unable
to determine what exactly that was, ...

!	***** Fact # 17.*****
17. Buildings in WOC-6 basin are 3035, 3022, and 3195.

This fact could be physically stored on the WOC-6 unit,
if there is a great need for this particular fact.
Alternatively, it could be "virtually" there - that is, derivable whenever
anyone asks for this fact, but absent until then.
A way of combining both would use the "BuildingsInBasin" virtual slot, which
has an algorithm describing how to fill in its value:

   WOC-6
     Isa:		(AnyOutFall)
     BuildingsInBasin:	(B3035 B3022 B3195)

   BuildingsInBasin
     Isa:		(AnySlot)
     Definition:	[Here is a specification describing how to determine
			   the value of U:BuildingsInBasin, given U, a unit
			   which represents an outfall.]
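
To make that bracketed specification slightly more concrete, here is one
possible shape for it - assuming a hypothetical DrainsInto slot on each
building unit, and a list AllBuildings of all the building units:

     Definition:	(λ (u) [Collect each b ε AllBuildings
			        for which b:DrainsInto = u])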
    

!	***** Fact # 18.*****
18. Container 86 has a capacity of 100 gallons and usually empties at a
	rate of 2 gallons/day.

The obvious question about this fact is what the "usually" modifier is
meant to convey.

Interpretation 1: The emptying-rate is a function of the amount of fluid
	which is now in the container; and the average of all volumes
	(perhaps weighted by the frequency of occurrence) is 2 gallons/day.

Interpretation 2: Left to itself, the emptying rate will be 2 gallons/day.
	However, if some pump is extracting the contents, that rate will be
	more; or if some pump is adding fluid, that rate will be less.

By cases, the solutions are

Case 1:

   Container86
     Capacity:		  100 gallons
     EmptyingRate:	  ?
     CurrentFilledVolume: y

   EmptyingRate
     Isa:	  (AnySlot)
     Definition:  [To compute U:EmptyingRate, apply Fn#803
			to U:CurrentFilledVolume.
			If this value is fairly stable, consider caching this value
			in U:EmptyingRate. Otherwise don't bother.]

Case 2:

   Container86
     Capacity:		      100 gallons
     EmptyingRate:	      (SeeUnit EmptyingRateOfContainer86)
     MitigatingCircumstances: y

   EmptyingRateOfContainer86
     Description:	This subunit is devoted to storing information about
			  the value of the EmptyingRate of Container86.
     *value*:		?
     UsualValue:	2 gallons/day
     ToComputeValue:	[Examine Container86:MitigatingCircumstances to determine
			 whether the value should be the default of 2 gallons/day,
			 or some other value.]

Note the existing retrieval functions will do the right thing with this SeeUnit
pointer to a subunit. (The smarts for this reside in the "SeeUnit" unit, of course.)
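
For example, the retrieval

    (GetValue 'Container86 'EmptyingRate)

(using the GetValue accessor that appears elsewhere in this report) should
notice the SeeUnit pointer, consult EmptyingRateOfContainer86:ToComputeValue,
and return the resulting rate.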

There is no reason the above two cases could not be combined, if that is
called for.

!	***** Fact # 19.*****
19. If the person doing the backtracking can no longer see the chemical, stop
	backtracking.

   Rule#555
     Isa:		(AnyRule)
     OnTask:		BackTracking
     IfPotentiallyRelevant:	((Unknown 'SourceLocation))
     IfTrulyRelevant:	(AND [All potential sites have been checked]
			     [Chemical not found] )
     ThenTellUser:	"Lost the trace. Stop backtracking.
			 (Unless you are sure about all the locations you've
			 reported, start backtracking, in the AI sense.)"
     ThenControl:	[Terminate BackTracking task, with failure]

!	***** Fact # 20.*****
20. If you want to find the source of the spill, select a person to look for it.

   Rule#666
     Isa:		(AnyRule)
     OnTask:		LocateSource
     IfPotentiallyRelevant: ((Unknown 'SourceLocation) )
     ThenAddToAgenda:	(SelectUser)

   SelectUser
     Isa:		(AnyTask)
     Description:	This task is responsible for selecting someone to look for
			 the source of a spill.
     RuleSet:		...

!	***** Fact # 21.*****
21. There is a trade-off between the severity of the hazard and the number of
	simultaneous tasks attempted:  the more severe, the more tasks.

[This is a trade off?]

Rather than explicitly represent this relation, I would prefer to
de-compile this down to more primitive facts:  For example, we could
assign an importance weight to the task at hand, and use this importance
weight to determine the number of simultaneous tasks to attempt.  This
importance measure would, in turn, be based on the severity of the
current hazard - to realize the above condition.  This could be stated
more directly by showing that the severity-of-hazard to importance-weight
function is monotonically increasing, as is the importance-weight to
number-of-simultaneous-tasks function.  The basic inferencing algorithm
would then reason that their composition is monotonic as well; which is
the "trade-off" condition.

   SeverityOfHazard
     Isa:		(AnySlot)
     Description:	This maps a task to a numeric value, which indicates
			 how severe the current hazard is.


   ImportanceMeasure
     Isa:		(AnySlot)
     Description:	This maps a task to a numeric value, which indicates
			 how important this task is to attempt/succeed.
     HighLevelDefn:	(Composition SH-IM SeverityOfHazard)
     Definition:	(λ (u) (SH-IM (GetValue u 'SeverityOfHazard)))


   NumberOfSimulTasks
     Isa:		(AnySlot)
     Description:	This indicates how many subtasks we can justify assigning
			 at once.
     Definition:	(λ (u) (IM-NoST (GetValue u 'ImportanceMeasure)))


   SH-IM
     Isa:		(AnyFunction)
     Description:	This function maps from SeverityOfHazard to
			 ImportanceMeasure, for a given task.
     Definition:	(λ (x) [...])
     Monotonic?:	Increasing


   IM-NoST
     Isa:		(AnyFunction)
     Description:	This function maps from ImportanceMeasure to
			 NumberOfSimulTasks, for a given task.
     Definition:	(λ (x) [...])
     Monotonic?:	Increasing


!	***** Fact # 22.*****
22. If two people's descriptions of a spill are inconsistent, prefer the second
	description.

To represent:
∀ sd1, sd2 ε SpillDescript. AppliesTo(sd1 ThisProblem) & AppliesTo(sd2 ThisProblem)
	& Inconsistent(sd1 sd2) =>
	IF [(GetValue sd1 'TimeRecorded) > (GetValue sd2 'TimeRecorded)]
		THEN Believe( sd1 )
		ELSE Believe( sd2 ).

[This intentionally does NOT address the question of how to determine when
two descriptions are inconsistent.]

  MostRecentDescription
     Isa:		(AnySlot)
     Description:	This slot returns the single description to be believed.
     HighLevelDefn:	(ApplyingFn CAR (PutInOrder AllDescriptions TimeRecorded))

Note the (PutInOrder AllDescriptions TimeRecorded) returns a function
which takes a unit, u, as an argument, and returns a list of the values,
(v1 ... vN), where each vi appeared in u:AllDescriptions, and where
i>j => vi:TimeRecorded < vj:TimeRecorded - ie the list is sorted most
recent first.  Taking the CAR of that list therefore returns the most
recent description.

This also does not consider performing this process only when two
descriptions are inconsistent - that could be achieved using the
following rule:

   Rule#452
     Isa:		(AnyRule)
     OnTask:		SpillCharacterization
     IfTrulyRelevant:	((Apply 'Inconsistent 'AllDescriptions))
     ThenModifyUnit:	[Use the value of uContext:MostRecentDescription rather
			 than uContext:AllDescriptions]

!	***** Fact # 23.*****
23. The OHMTADS mnemonic for corrosiveness is "COR".

   Corrosiveness
     Isa:		(AnyProperty)
     OHMTADSmnen:	"COR"

!	***** Fact # 24.*****
24. Fuel oil spills often come from construction equipment.

1.
   Rule#231
     Description:	This says to suspect construction equipment if some
			 fuel oil is found.
     IfPotentiallyRelevant: ((Isa 'SpilledLiquid 'AnyFuelOil))
     IfAskUser:		"Has there been any construction equipment around there
			 recently?"
     ThenTellUser:	"That piece of equipment might have spilt that oil."

2.
We could first have written a general rule, which indicated
"IF	 SubstanceX is part of ObjectY,
     AND SubstanceX is found out of place
     AND ObjectY had been around that place
 THEN Suspect ObjectY is source of SubstanceX."

With this, some knowledge that spills are liquids out of place, and the following
units, we might have expected EURISKO to build up Rule#231.

   AnyConstructionEquipment
     Description:	This describes the class of construction equipment.
     TypicalExample:	TypicalCE

   TypicalCE
     Description:	This refers to a typical piece of construction equipment.
     TypicalExampleOf:	AnyConstructionEquipment
     Components:	[Fuel oil, nuts, bolts, ...]

3. A third solution uses Events:

   Event#20
     Description:	The event of construction equipment being around.
     PossiblyCausesE:	(Event#21)
     PossiblyCausesE*:	(Event#21 Event#22)

   Event#21
     Description:	The event of FUEL oil spilling from construction equipment.
     PossiblyCausedByE:	(Event#20)
     PossiblyCausesE:	(Event#22)
     GenlE:	      	(Event#14)
     Substance:	  	[Any fuel oil]
     FromLocation:	[Any piece of construction equipment]

  Event#22
     Description:	The event of finding a fuel oil spill.
     PossiblyCausedByE:	(Event#21)

For context, we include Event#14, which was also used for Fact 6.

   Event#14
     Description:	The event of oil spilling from a machine.
     PossiblyCausedByE: (Event#13)
     GenlE:	      	(Event#17)
     Substance:	  	[Any oil]
     FromLocation:	[Any machine]

Finally, PossiblyCausesE* is the transitive closure of the PossiblyCausesE relation.
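
In RLL this starred slot need not be filled in by hand.  A sketch of a
definition for it, assuming a TransitiveClosure slot combiner (hypothetical
here, but in the spirit of the Composition combiner used for Fact 21):

   PossiblyCausesE*
     Isa:		(AnySlot)
     Description:	This holds every event reachable from a given event
			 via one or more PossiblyCausesE links.
     HighLevelDefn:	(TransitiveClosure PossiblyCausesE)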

!	***** Fact # 25.*****
25. Attempt to minimize the number of persons called on-site.

The general problem solving strategy is as follows:
1. Find all feasible solution plans.
2. Select the best plan, and execute it.

[Note this will be true for sub-parts of this plan as well.]
Step 2 can be done by the Definition of BestPlan -

   BestPlan
     Isa:		(AnySlot)
     Description:	This selects from P:AllPlans (where P is a problem to be
			 solved) the best member -- ie the value of sεP:AllPlans
			 which maximizes the value of s:GoodNess
     HighLevelDefn:	(ApplyingFn MAX AllPlans)

This requires a sophisticated GoodNess slot -

   GoodNess
     Isa:		(AnySlot)
     Description:	This assigns to each solution plan, s, a numeric value.
			  Note this measure varies from class of problem to class
			  of problem.
     HighLevelDefn:	(OneOf SpillProblemGoodNess ... )

   SpillProblemGoodNess
     Isa:		(AnySlot)
     Description:	This assigns a numeric value to each SPILL solution plan, s.
     HighLevelDefn:	(VariesWith (ApplyingFn MINUS PeopleCalledOnSite) ...)
     Definition:	(λ (u) [... when u refers to a spill problem, lower
			 u:GoodNess as u:PeopleCalledOnSite increases ...])

Note - OneOf, VariesWith and ApplyingFn are all slot combiners, each of
which does the appropriate thing.

Note this shows that ONE criterion for selection is consideration of the
number of people called on-site.  (This PeopleCalledOnSite slot, of course,
must be defined as well.)
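
A minimal sketch of that slot, in the same style as the others (with its
body only indicated):

   PeopleCalledOnSite
     Isa:		(AnySlot)
     Description:	This maps a solution plan, s, to the number of people
			 that plan calls on-site.
     Definition:	(λ (s) [... count the personnel requested by the
			 steps of plan s ...])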

!	***** Fact # 26.*****
26. If a heuristic calls for an action you can't currently perform, ignore
	the heuristic.

[Should we discriminate between heuristics & hard-and-fast rules?
I'm assuming not.]

The function which actually interprets the various rules is itself built from
(meta-)rules, which should include this one (at least when considering
each SPILL rule.)

   Rule#M325
     Isa:		(AnyMetaRule)
     IfPotentiallyRelevant: [Currently considered rule is a SPILL rule]
     IfTrulyRelevant:	[(CannotDo 'ThenDoAction)]
     ThenTellUser:	(CONCAT "I'm not even considering the " uThisRule
				" rule, as I can't perform its action, anyway.")
     ThenModifyRuleSet:	[Delete uThisRule from current rule set.]


AnyMetaRule ⊂ AnyRule

!	***** Fact # 27.*****
27. If a hazardous substance has spilled, don't worry about human resource
	limitations.

Basically, this Rule#777 instructs the SpillProblemGoodNess to ignore the value
of the problem's PeopleCalledOnSite slot when the substance is hazardous.

   Rule#777
     Isa:		(AnyRule)
     OnTasks:		[Every SPILL task]
     IfPotentiallyRelevant: ((Apply 'Hazardous? 'Substance))
     ThenModifyUnit:	[Redefine PeopleCalledOnSite so it always returns
				"Irrelevant"]

∂25 September 1980 1628-EDT  John.McDermott 	description of ops5


                               OVERVIEW OF OPS5

                           Draft of September, 1980

                           C. Forgy and J. McDermott



                                 INTRODUCTION

OPS5  is  a  production  system language; that is, it is a programming language
having only one kind of instruction,  the  production.    A  production,  which
consists of a list of conditions and a list of actions, can be thought of as an
assertion  that its list of actions can be performed when all of its conditions
are satisfied.  The OPS5 interpreter executes a production system by performing
the following operations.

   1. Determine which productions have all their conditions satisfied.
      (This step is called match.)

   2. If   no  productions  have  all  their  conditions  satisfied,  halt
      execution.  Otherwise, select one production that does.  (This  step
      is called conflict resolution.)

   3. Perform  the  actions  of  the  selected  production.  (This step is
      called act.)

   4. Goto 1.

It is worth noting that the control  in  a  production  system  interpreter  is
strictly  non-hierarchical;  the  selected  production  executes  to completion
before the next production is selected.

                                WORKING MEMORY

The data that the productions operate on is held in a global data  base  called
working  memory.    The elements of an OPS5 working memory are considered to be
constant expressions; the interpreter does  not  ascribe  any  meaning  to  the
symbols  composing  the expressions.  OPS5 working memory elements are vectors,
which can contain numbers and atoms like those of LISP.  The following element,
a vector of three atoms, is typical.

    (TASK-ORDER SOURCE-LOCATION CONTAINMENT)

The lengths of the vectors can vary dynamically at run time.

Although vectors are convenient for a few purposes (such as representing the
queue of tasks shown above), for most purposes custom-tailored representations
are preferable.  OPS5 supports a data structuring facility like record classes,
which allows one to declare that certain elements contain named  fields.    For
example,  the  following declares that elements of type MATERIAL contain fields
NAME, CLASS, HAZARD, and COLOR.

    (RECORD MATERIAL
            NAME
            CLASS
            HAZARD
            COLOR)

A typical object of class MATERIAL is

    (MATERIAL ↑NAME H2SO4 ↑COLOR COLORLESS ↑CLASS ACID)

The ↑ is the OPS5 operator that distinguishes field names from ordinary  atoms.
The  order of specifying the fields is unimportant, and it is not necessary for
every field in an element to be filled.

                                  CONDITIONS

The conditions in productions are forms like the  elements  in  working  memory
except  that  conditions  may  contain certain special symbols.  Two simple but
typical conditions are

    (TASK-ORDER <FIRST>)

and

    (MATERIAL ↑CLASS ACID ↑NAME <MAT>)

The symbols that begin with < and end with > are variables; the  other  symbols
here are interpreted as they would be in working memory elements.

The  interpreter  determines  whether a condition "matches" (i.e., is satisfied
by) a working memory element by comparing the subelements of the  condition  to
the  corresponding subelements of the working memory element.  Every subelement
of  the  condition  element  must  match  the  corresponding   working   memory
subelement.  The rules for deciding whether constants and variables match are:

   - A constant symbol will match only an equal constant.

   - A  variable  will match any symbol, but if a variable occurs multiple
     times within a production, all occurrences of the variable must match
     equal symbols.

A variable is said to be bound to the symbol it matches.  For example, the
condition element

    (MATERIAL ↑CLASS ACID ↑NAME <MAT>)

would match the working memory element

    (MATERIAL ↑NAME H2SO4 ↑COLOR COLORLESS ↑CLASS ACID)

binding <MAT> to H2SO4.

OPS5  provides  a  number of operators for modifying the meaning of a condition
subelement.  Three of the operators  are  particularly  important:  the  prefix
operator, <>, and the two kinds of brackets, { } and << >>.  The first operator
is the not-equal operator.  The pair

    <> value

will match anything except what is matched by

    value

The  brackets, { }, indicate that the enclosed values are all to match the same
working memory subelement.  Thus the following

    ↑VALUE { <X> <> NIL }

would match any VALUE that was not equal to NIL and bind the  variable  <X>  to
it.    The  other  kind  of  bracket,  << >>, indicates that the working memory
subelement can match any one of the subelements within the brackets.
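
For example, the (hypothetical) condition element

    (MATERIAL ↑CLASS << ACID OIL >>)

would match any MATERIAL element whose CLASS field is either ACID or OIL.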

The condition part of a production consists of one or more conditions, often
with a few of the conditions preceded by the operator, -.  For example,

            (GOAL  ↑STATUS ACTIVE  ↑NAME DEDUCE-COUNTER-MEASURES)
            (SOURCE  ↑KIND PERMANENT-STORAGE-TANK  ↑LOCATION <AT>)
          - (COUNTER-MEASURE  ↑LOCATION <AT>  ↑KIND DIKE)

The condition part as a whole is satisfied when all of the conditions not
preceded by - match working memory elements and none of the conditions
preceded by - match working memory elements.

                                    ACTIONS

OPS5  provides  eight actions that can occur in a production's action sequence.
The four most important actions are MAKE, REMOVE,  MODIFY,  and  WRITE.    MAKE
creates and adds to working memory one new element.  For example,

    (MAKE MATERIAL ↑NAME H2SO4 ↑COLOR COLORLESS ↑CLASS ACID)

would add the element

    (MATERIAL ↑NAME H2SO4 ↑COLOR COLORLESS ↑CLASS ACID)

REMOVE deletes one or more elements from working memory.  The action

    (REMOVE 1)

would delete the element that was matched by the first condition element of the
production.    MODIFY  changes  one or more subelements of an existing element.
The action

    (MODIFY 1 ↑STATUS PENDING)

would change

    (GOAL ↑STATUS ACTIVE ↑WANT PROCESS-HAZARDOUS-SUBSTANCE)

to

    (GOAL ↑STATUS PENDING ↑WANT PROCESS-HAZARDOUS-SUBSTANCE)

WRITE types information on the user's terminal.  The action

    (WRITE (CRLF) ENTER THE NAME OF THE MATERIAL SPILLED:)

would start a new line (CRLF is a function that returns the end of line symbol)
and then type

    ENTER THE NAME OF THE MATERIAL SPILLED:

                             FORM OF A PRODUCTION

The production as a whole consists of (1) the symbol P, (2)  the  name  of  the
production,  (3)  the condition part of the production, (4) the symbol -->, and
(5) the action part of the production, with everything enclosed in parentheses.
For example,

    (P  DEDUCE-COUNTER-MEASURES
            (GOAL  ↑STATUS ACTIVE  ↑NAME DEDUCE-COUNTER-MEASURES)
            (SOURCE  ↑KIND PERMANENT-STORAGE-TANK  ↑LOCATION <AT>)
          - (COUNTER-MEASURE  ↑LOCATION <AT>  ↑KIND DIKE)
            -->
            (MAKE  COUNTER-MEASURE  ↑LOCATION <AT>  ↑KIND DIKE))
                ---------------

-------

∂9 Oct 1980 1558-PDT	Lee Erman <ERMAN at USC-ISIB> 	Hearsay-III report.
To: ESKErs: ;

                  HEARSAY-III EXPERT-SYSTEMS WORKSHOP REPORT


                          Lee Erman and Philip London
                      USC/Information Sciences Institute
                                October 9, 1980

1. Problem and Subproblem Considered



1.1. The General Problem
  We  have  tried  to  gain an understanding of and produce the beginnings of a
solution to the entire problem of crisis management in the spill  domain.    We
believe  the  central  problems  to  be those of incorporating large amounts of
diverse  knowledge  and  managing  competing  goals  (e.g.,  source   location,
identification,  notification,  countermeasures,  and  minimization of expended
resources) whose relative priorities vary as new information is acquired.

  We see several important classes of actions that the system is to perform:

   - Collect, aggregate, and present data.

   - Interpret data.

   - Notice problems and notify of their existence.

   - Allocate resources, both computer-external (e.g., personnel available
     to  take  countermeasures)  and  computer-internal  (e.g.,  accessing
     expensive databases).



1.2. The Source-location Subproblem
  In  going  into some depth on the source-location backtracking subproblem, we
made certain simplifying assumptions:

   - A single person assignable for observations,

   - continuous flow,

   - observation of type of material restricted to choice of acid, oil, or
     no pollutant, and

   - each non-negative observation to include an estimate of flow rate  of
     pollutant (gallons/minute).

  Each time the observer is free, another location (manhole, sump, or inlet) is
chosen  and  the observer is assigned to go there and make an observation.  The
factors considered in the choice of the next location include:

   - likely sources of the material (indicated from the inventory or other
     means), constrained by the current identification of the material and
     estimates of quantity,

   - the amount of information likely to be  gained  by  the  observation,
     based on the topology of the network,

   - length of transit time for observer to the location, and

   - ease of access to the location.

  Any of these kinds of information can be updated at any time and can have the
effect  of  immediately  affecting  the  backtracking  process  of the assigned
observer.

2. Resulting Design
  The critical facility needed by the system to  function  effectively  is  the
ability  to  model.    Of primary interest is modeling the ongoing state of the
crisis at hand, in the context of the Oak Ridge site.  In addition,  there  are
important  subsidiary  tasks  of  keeping  a  history  of  what  has transpired
(modeling the past) and anticipating and planning (modeling the future).  In  a
Hearsay-III-based  system,  the  natural place to build the dynamic model is on
the blackboard.  Knowledge sources can use and manipulate the model to  perform
the various actions described in the first section.

  What  we  accomplished  at  the  Workshop  in  terms  of a concrete design is
centered around the source-location backtracking subproblem.  A portion of  the
type hierarchy of the domain model relevant to that subproblem is given here:

     DomainUnit    {This is the Hearsay-III-supplied root of the tree.}
         Route
             UndergroundNetwork
         Location
             Building
             UndergroundNetNode
                 Inlet
                 Sump
                 ManHole
                 Juncture
                 OutFall
         Person
         Observer
         Event
             Observation
         Material
             Acid
             Oil
             Nothing


  In  addition,  we  specified (and partially coded) some knowledge sources for
this subproblem:

  The OBSERVATION-REQUEST-EVALUATOR knowledge source triggers  on  the  initial
detection  of  a  pollutant  in  a particular drainage basin.  Its action is to
associate with each manhole, sump, and inlet in the basin an indication of  the
desirability  of having an observation at that location.  It takes into account
likely spill sources and information gain, as described above.  This  knowledge
source  may subsequently retrigger and recalculate desirabilities if any of its
input data change, e.g., if more  is  determined  about  the  identity  of  the
pollutant,  thus changing the likelihoods of locations as sources, based on the
inventory.

  The OBSERVATION-REQUEST-ASSIGNER knowledge  source  triggers  initially  when
some  other  knowledge  source  (not described here) assigns an observer to the
task of backtracking to the source.  Its action is to assign the  observer  the
task  of  going  from  his  current  location to another location and making an
observation there.  The choice of next location is based on cost (transit  time
and ease of access) and desirability (as calculated by the OBSERVATION-REQUEST-
EVALUATOR).   The OBSERVATION-REQUEST-ASSIGNER subsequently retriggers whenever
the observer finishes his assigned observation.  It may also retrigger  if  its
inputs  (e.g.,  desirability  of  observation)  change;  in  such a case it may
reassign the observer before he has finished his current assignment.

  The  OBSERVATION-REQUEST-REEVALUATOR  knowledge  source   triggers   on   the
completion  of  an  observation  (either  assigned  by the OBSERVATION-REQUEST-
ASSIGNER or received unsolicited).  Its job is to reevaluate  the  desirability
of  observation  for  all  locations  in the basin, based on the results of the
observation.  Note that it makes  good  sense  for  this  knowledge  source  to
execute  before  the OBSERVATION-REQUEST-ASSIGNER so that that knowledge source
has updated desirabilities and is less likely to have to be  retriggered;  this
prioritization of knowledge sources is handled by scheduling knowledge sources.

3. Brief Chronology of the Development Process
  First two days:
Familiarization with the problem, probing difficulties, especially representing
time-varying  data  and  observations  not  arriving  in  chronological  order.
Finally, we dismissed the latter problem as an unnecessary complication of this
domain.

  Monday afternoon:
Started concentrating on the source-location problem, defining and  simplifying
it  and  producing  a preliminary design of the relevant portions of the domain
model.

  Tuesday morning:
"Completed" the domain model and made first pass on the knowledge  sources  for
source location.

  Tuesday P.M.:
Coding of the model and knowledge sources.

4. Strengths
  Several  of  the  strengths  of Hearsay-III were brought out by this problem.
Most important among these  is  that  Hearsay-III  supports  interaction  among
numerous,   diverse   sources   of   knowledge   and   competing   subproblems.
Additionally,  Hearsay-III  provides  great  flexibility   in   the   allowable
granularity  of  knowledge sources; this allows for grouping the knowledge into
chunks based on considerations of naturalness.

  Hearsay-III also provides for a great deal of flexibility in the modelling of
the problem domain, so that we were able to explore the domain in a natural way
without feeling constrained by built-in restrictions on the possible
representations of knowledge.

  Finally, Hearsay-III allows for the separation of performance knowledge  from
competence  knowledge.   We took advantage of this by concentrating our efforts
on the competence aspects of this problem.  We believe, although without direct
evidence here, that performance knowledge can be added at a later time with no
ill effects for having been delayed.  Another benefit of this separation is the
ability  to  implement  differing modes of system interaction (e.g., having the
system control the crisis management vs. advising a human manager) with minimal
changes.

5. Weaknesses
  One major weakness of the Hearsay-III  system  presented  itself  during  our
consideration  of  the  problem.    Hearsay-III  does  not provide a high level
representation language for describing the  domain  model.    A  representation
language  might  have  assisted  us  in  expressing such knowledge as taxonomic
relations and static domain information (e.g., the drainage basin map).
Although  Hearsay-III  does  have a reasonable internal representation for such
information, state-of-the-art representation languages such as KRL,  UNITS,  or
RLL  could  be  of  great assistance in rapidly codifying domain knowledge. The
current external Hearsay-III format for this information  is  very  Lispish  in
nature.

6. New Perceptions
  Our previous experiences with describing domain knowledge to Hearsay-III have
pointed  out  the  problems with the lack of a good external representation for
the domain model.  However, our (slightly more leisurely) pace in our other
experiences has resulted in a tolerance for this weakness.  The demands of this
experiment  with respect to time constraints have clarified for us the need for
such an external representation. It is our feeling that  this  problem  can  be
fixed by adapting an existing knowledge representation language.

  Our  experience  in  working  with  this  problem  has  also  pointed out the
potential utility of relaxing the restriction that all knowledge-source actions
be written in Lisp.  Lisp as the language of KS actions could be augmented with
choices  of  other  mechanisms  (e.g.,  production  systems  adapted  to  allow
Hearsay-III blackboard interaction).  The key here is that we would  desire  to
support  such  other mechanisms as standard utilities in the system rather than
requiring a user to encode such mechanisms in his KS actions written in LISP.

  Finally, our perception is that the granularity usually chosen for
Hearsay-III knowledge sources -- larger than rules in most production systems,
but smaller than, for example,  all  of  MYCIN  --  is  an  excellent  one  for
knowledge-source interaction, and seems to be missing in most other KE tools.
-------
                ---------------
-------